amazonka-textract
Copyright     (c) 2013-2021 Brendan Hay
License       Mozilla Public License, v. 2.0.
Maintainer    Brendan Hay <brendan.g.hay+amazonka@gmail.com>
Stability     auto-generated
Portability   non-portable (GHC extensions)
Safe Haskell  None

Amazonka.Textract.Lens

Description

Synopsis

Operations

DetectDocumentText

detectDocumentText_document :: Lens' DetectDocumentText Document Source #

The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Textract operations, you can't pass image bytes. The document must be an image in JPEG or PNG format.

If you're using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes that are passed using the Bytes field.
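
As a hedged illustration of how these lenses are typically combined, the sketch below builds a DetectDocumentText request that points at an S3 object rather than inline bytes. The smart constructors (newDetectDocumentText, newDocument, newS3Object) and the bucket and key names are assumptions; only the lenses documented here are relied on, together with the (&) and (?~) operators from a lens library such as Control.Lens.

  {-# LANGUAGE OverloadedStrings #-}

  import Amazonka.Textract
  import Control.Lens ((&), (?~))

  -- Build a DetectDocumentText request from an S3 object.
  -- Bucket and key names are placeholders.
  s3Request :: DetectDocumentText
  s3Request = newDetectDocumentText document
    where
      document =
        newDocument & document_s3Object ?~ s3Source
      s3Source =
        newS3Object
          & s3Object_bucket ?~ "example-input-bucket"
          & s3Object_name   ?~ "statements/page-1.png"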

detectDocumentTextResponse_documentMetadata :: Lens' DetectDocumentTextResponse (Maybe DocumentMetadata) Source #

Metadata about the document. It contains the number of pages that are detected in the document.

detectDocumentTextResponse_blocks :: Lens' DetectDocumentTextResponse (Maybe [Block]) Source #

An array of Block objects that contain the text that's detected in the document.

StartDocumentAnalysis

startDocumentAnalysis_jobTag :: Lens' StartDocumentAnalysis (Maybe Text) Source #

An identifier that you specify, which is included in the completion notification published to the Amazon SNS topic. For example, you can use JobTag to identify the type of document that the completion notification corresponds to (such as a tax form or a receipt).

startDocumentAnalysis_notificationChannel :: Lens' StartDocumentAnalysis (Maybe NotificationChannel) Source #

The Amazon SNS topic ARN that you want Amazon Textract to publish the completion status of the operation to.

startDocumentAnalysis_kmsKeyId :: Lens' StartDocumentAnalysis (Maybe Text) Source #

The KMS key used to encrypt the inference results. This can be in either Key ID or Key Alias format. When a KMS key is provided, it is used for server-side encryption of the objects in the customer bucket. When this parameter is not provided, the result is encrypted server side, using SSE-S3.

startDocumentAnalysis_outputConfig :: Lens' StartDocumentAnalysis (Maybe OutputConfig) Source #

Sets whether the output will go to a customer-defined bucket. By default, Amazon Textract saves the results internally, to be accessed by the GetDocumentAnalysis operation.

startDocumentAnalysis_clientRequestToken :: Lens' StartDocumentAnalysis (Maybe Text) Source #

The idempotent token that you use to identify the start request. If you use the same token with multiple StartDocumentAnalysis requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once. For more information, see Calling Amazon Textract Asynchronous Operations.

startDocumentAnalysis_featureTypes :: Lens' StartDocumentAnalysis [FeatureType] Source #

A list of the types of analysis to perform. Add TABLES to the list to return information about the tables that are detected in the input document. Add FORMS to return detected form data. To perform both types of analysis, add TABLES and FORMS to FeatureTypes. All lines and words detected in the document are included in the response (including text that isn't related to the value of FeatureTypes).

startDocumentAnalysisResponse_jobId :: Lens' StartDocumentAnalysisResponse (Maybe Text) Source #

The identifier for the document text detection job. Use JobId to identify the job in a subsequent call to GetDocumentAnalysis. A JobId value is only valid for 7 days.
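
A brief sketch of how the request lenses above compose: starting from a request value built with the generated smart constructor (whose exact arguments are not shown in this section, so it is left as a parameter here), the optional fields are filled in with (&), (.~) and (?~). The FeatureType_TABLES and FeatureType_FORMS pattern names follow the amazonka 2.x enum naming and are an assumption, as are the token and tag values.

  {-# LANGUAGE OverloadedStrings #-}

  import Amazonka.Textract
  import Control.Lens ((&), (.~), (?~))

  -- Fill in the optional StartDocumentAnalysis fields on an existing request.
  withAnalysisOptions :: StartDocumentAnalysis -> StartDocumentAnalysis
  withAnalysisOptions req =
    req
      & startDocumentAnalysis_featureTypes .~ [FeatureType_TABLES, FeatureType_FORMS]
      & startDocumentAnalysis_clientRequestToken ?~ "receipt-batch-42" -- placeholder idempotency token
      & startDocumentAnalysis_jobTag ?~ "receipt"                      -- placeholder tag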

AnalyzeDocument

analyzeDocument_humanLoopConfig :: Lens' AnalyzeDocument (Maybe HumanLoopConfig) Source #

Sets the configuration for the human-in-the-loop workflow for analyzing documents.

analyzeDocument_document :: Lens' AnalyzeDocument Document Source #

The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Textract operations, you can't pass image bytes. The document must be an image in JPEG or PNG format.

If you're using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes that are passed using the Bytes field.

analyzeDocument_featureTypes :: Lens' AnalyzeDocument [FeatureType] Source #

A list of the types of analysis to perform. Add TABLES to the list to return information about the tables that are detected in the input document. Add FORMS to return detected form data. To perform both types of analysis, add TABLES and FORMS to FeatureTypes. All lines and words detected in the document are included in the response (including text that isn't related to the value of FeatureTypes).

analyzeDocumentResponse_documentMetadata :: Lens' AnalyzeDocumentResponse (Maybe DocumentMetadata) Source #

Metadata about the analyzed document. An example is the number of pages.

analyzeDocumentResponse_blocks :: Lens' AnalyzeDocumentResponse (Maybe [Block]) Source #

The items that are detected and analyzed by AnalyzeDocument.

analyzeDocumentResponse_analyzeDocumentModelVersion :: Lens' AnalyzeDocumentResponse (Maybe Text) Source #

The version of the model used to analyze the document.
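
To show how the response lenses fit together, here is a small sketch that reads the page count and the detected blocks back out of an AnalyzeDocumentResponse; it uses only the lenses documented above.

  import Amazonka.Textract
  import Control.Lens ((^.))
  import Data.Maybe (fromMaybe)
  import Numeric.Natural (Natural)

  -- Number of pages reported in the document metadata, if any.
  pageCount :: AnalyzeDocumentResponse -> Maybe Natural
  pageCount rsp =
    rsp ^. analyzeDocumentResponse_documentMetadata >>= (^. documentMetadata_pages)

  -- All detected and analyzed blocks, defaulting to an empty list.
  detectedBlocks :: AnalyzeDocumentResponse -> [Block]
  detectedBlocks rsp = fromMaybe [] (rsp ^. analyzeDocumentResponse_blocks)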

GetDocumentTextDetection

getDocumentTextDetection_nextToken :: Lens' GetDocumentTextDetection (Maybe Text) Source #

If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.

getDocumentTextDetection_maxResults :: Lens' GetDocumentTextDetection (Maybe Natural) Source #

The maximum number of results to return per paginated call. The largest value you can specify is 1,000. If you specify a value greater than 1,000, a maximum of 1,000 results is returned. The default value is 1,000.

getDocumentTextDetection_jobId :: Lens' GetDocumentTextDetection Text Source #

A unique identifier for the text detection job. The JobId is returned from StartDocumentTextDetection. A JobId value is only valid for 7 days.
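
A sketch of how the paging parameters above are typically set; newGetDocumentTextDetection is assumed to be the generated smart constructor taking the JobId, and the page size of 1,000 mirrors the documented maximum.

  import Amazonka.Textract
  import Control.Lens ((&), (?~))
  import Data.Text (Text)

  -- First page of results for a completed text-detection job.
  firstPage :: Text -> GetDocumentTextDetection
  firstPage jobId =
    newGetDocumentTextDetection jobId
      & getDocumentTextDetection_maxResults ?~ 1000

  -- Subsequent page, using the NextToken returned by the previous response.
  nextPage :: Text -> Text -> GetDocumentTextDetection
  nextPage jobId token =
    firstPage jobId & getDocumentTextDetection_nextToken ?~ token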

getDocumentTextDetectionResponse_documentMetadata :: Lens' GetDocumentTextDetectionResponse (Maybe DocumentMetadata) Source #

Information about a document that Amazon Textract processed. DocumentMetadata is returned in every page of paginated responses from an Amazon Textract operation.

getDocumentTextDetectionResponse_warnings :: Lens' GetDocumentTextDetectionResponse (Maybe [Warning]) Source #

A list of warnings that occurred during the text-detection operation for the document.

getDocumentTextDetectionResponse_nextToken :: Lens' GetDocumentTextDetectionResponse (Maybe Text) Source #

If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text-detection results.

getDocumentTextDetectionResponse_statusMessage :: Lens' GetDocumentTextDetectionResponse (Maybe Text) Source #

Returned if the detection job could not be completed. Contains an explanation of the error that occurred.

AnalyzeExpense

StartDocumentTextDetection

startDocumentTextDetection_jobTag :: Lens' StartDocumentTextDetection (Maybe Text) Source #

An identifier that you specify, which is included in the completion notification published to the Amazon SNS topic. For example, you can use JobTag to identify the type of document that the completion notification corresponds to (such as a tax form or a receipt).

startDocumentTextDetection_notificationChannel :: Lens' StartDocumentTextDetection (Maybe NotificationChannel) Source #

The Amazon SNS topic ARN that you want Amazon Textract to publish the completion status of the operation to.

startDocumentTextDetection_kmsKeyId :: Lens' StartDocumentTextDetection (Maybe Text) Source #

The KMS key used to encrypt the inference results. This can be in either Key ID or Key Alias format. When a KMS key is provided, it is used for server-side encryption of the objects in the customer bucket. When this parameter is not provided, the result is encrypted server side, using SSE-S3.

startDocumentTextDetection_outputConfig :: Lens' StartDocumentTextDetection (Maybe OutputConfig) Source #

Sets whether the output will go to a customer-defined bucket. By default, Amazon Textract saves the results internally, to be accessed with the GetDocumentTextDetection operation.

startDocumentTextDetection_clientRequestToken :: Lens' StartDocumentTextDetection (Maybe Text) Source #

The idempotent token that's used to identify the start request. If you use the same token with multiple StartDocumentTextDetection requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once. For more information, see Calling Amazon Textract Asynchronous Operations.

startDocumentTextDetectionResponse_jobId :: Lens' StartDocumentTextDetectionResponse (Maybe Text) Source #

The identifier of the text detection job for the document. Use JobId to identify the job in a subsequent call to GetDocumentTextDetection. A JobId value is only valid for 7 days.

GetDocumentAnalysis

getDocumentAnalysis_nextToken :: Lens' GetDocumentAnalysis (Maybe Text) Source #

If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.

getDocumentAnalysis_maxResults :: Lens' GetDocumentAnalysis (Maybe Natural) Source #

The maximum number of results to return per paginated call. The largest value that you can specify is 1,000. If you specify a value greater than 1,000, a maximum of 1,000 results is returned. The default value is 1,000.

getDocumentAnalysis_jobId :: Lens' GetDocumentAnalysis Text Source #

A unique identifier for the text-detection job. The JobId is returned from StartDocumentAnalysis. A JobId value is only valid for 7 days.

getDocumentAnalysisResponse_documentMetadata :: Lens' GetDocumentAnalysisResponse (Maybe DocumentMetadata) Source #

Information about a document that Amazon Textract processed. DocumentMetadata is returned in every page of paginated responses from an Amazon Textract operation.

getDocumentAnalysisResponse_blocks :: Lens' GetDocumentAnalysisResponse (Maybe [Block]) Source #

The results of the text-analysis operation.

getDocumentAnalysisResponse_warnings :: Lens' GetDocumentAnalysisResponse (Maybe [Warning]) Source #

A list of warnings that occurred during the document-analysis operation.

getDocumentAnalysisResponse_nextToken :: Lens' GetDocumentAnalysisResponse (Maybe Text) Source #

If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text detection results.

getDocumentAnalysisResponse_statusMessage :: Lens' GetDocumentAnalysisResponse (Maybe Text) Source #

Returned if the detection job could not be completed. Contains an explanation of the error that occurred.
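
As a worked sketch of the pagination contract described above, the loop below keeps calling GetDocumentAnalysis until no NextToken is returned, concatenating the blocks from each page. The Env/send plumbing (Amazonka.newEnv, Amazonka.send, runResourceT) follows the amazonka 2.x interface and is an assumption, as is the newGetDocumentAnalysis smart constructor; the lenses are the ones documented in this section.

  import qualified Amazonka
  import Amazonka.Textract
  import Control.Lens ((&), (.~), (^.))
  import Control.Monad.Trans.Resource (runResourceT)
  import Data.Maybe (fromMaybe)
  import Data.Text (Text)

  -- Collect every Block from a finished analysis job, following NextToken.
  allAnalysisBlocks :: Amazonka.Env -> Text -> IO [Block]
  allAnalysisBlocks env jobId = runResourceT (go Nothing)
    where
      go token = do
        rsp <-
          Amazonka.send env $
            newGetDocumentAnalysis jobId
              & getDocumentAnalysis_nextToken .~ token
        let blocks = fromMaybe [] (rsp ^. getDocumentAnalysisResponse_blocks)
        case rsp ^. getDocumentAnalysisResponse_nextToken of
          Nothing   -> pure blocks
          Just next -> (blocks ++) <$> go (Just next)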

Types

Block

block_columnSpan :: Lens' Block (Maybe Natural) Source #

The number of columns that a table cell spans. Currently this value is always 1, even if the number of columns spanned is greater than 1. ColumnSpan isn't returned by DetectDocumentText and GetDocumentTextDetection.

block_text :: Lens' Block (Maybe Text) Source #

The word or line of text that's recognized by Amazon Textract.

block_entityTypes :: Lens' Block (Maybe [EntityType]) Source #

The type of entity. The following can be returned:

  • KEY - An identifier for a field on the document.
  • VALUE - The field text.

EntityTypes isn't returned by DetectDocumentText and GetDocumentTextDetection.

block_columnIndex :: Lens' Block (Maybe Natural) Source #

The column in which a table cell appears. The first column position is 1. ColumnIndex isn't returned by DetectDocumentText and GetDocumentTextDetection.

block_page :: Lens' Block (Maybe Natural) Source #

The page on which a block was detected. Page is returned by asynchronous operations. Page values greater than 1 are only returned for multipage documents that are in PDF format. A scanned image (JPEG/PNG), even if it contains multiple document pages, is considered to be a single-page document. The value of Page is always 1. Synchronous operations don't return Page because every input document is considered to be a single-page document.

block_rowSpan :: Lens' Block (Maybe Natural) Source #

The number of rows that a table cell spans. Currently this value is always 1, even if the number of rows spanned is greater than 1. RowSpan isn't returned by DetectDocumentText and GetDocumentTextDetection.

block_selectionStatus :: Lens' Block (Maybe SelectionStatus) Source #

The selection status of a selection element, such as an option button or check box.

block_rowIndex :: Lens' Block (Maybe Natural) Source #

The row in which a table cell is located. The first row position is 1. RowIndex isn't returned by DetectDocumentText and GetDocumentTextDetection.

block_confidence :: Lens' Block (Maybe Double) Source #

The confidence score that Amazon Textract has in the accuracy of the recognized text and the accuracy of the geometry points around the recognized text.

block_relationships :: Lens' Block (Maybe [Relationship]) Source #

A list of child blocks of the current block. For example, a LINE object has child blocks for each WORD block that's part of the line of text. There aren't Relationship objects in the list for relationships that don't exist, such as when the current block has no child blocks. The list size can be the following:

  • 0 - The block has no child blocks.
  • 1 - The block has child blocks.

block_geometry :: Lens' Block (Maybe Geometry) Source #

The location of the recognized text on the image. It includes an axis-aligned, coarse bounding box that surrounds the text, and a finer-grain polygon for more accurate spatial information.

block_textType :: Lens' Block (Maybe TextType) Source #

The kind of text that Amazon Textract has detected: handwriting or printed text.

block_id :: Lens' Block (Maybe Text) Source #

The identifier for the recognized text. The identifier is only unique for a single operation.

block_blockType :: Lens' Block (Maybe BlockType) Source #

The type of text item that's recognized. In operations for text detection, the following types are returned:

  • PAGE - Contains a list of the LINE Block objects that are detected on a document page.
  • WORD - A word detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces.
  • LINE - A string of tab-delimited, contiguous words that are detected on a document page.

In text analysis operations, the following types are returned:

  • PAGE - Contains a list of child Block objects that are detected on a document page.
  • KEY_VALUE_SET - Stores the KEY and VALUE Block objects for linked text that's detected on a document page. Use the EntityType field to determine if a KEY_VALUE_SET object is a KEY Block object or a VALUE Block object.
  • WORD - A word that's detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces.
  • LINE - A string of tab-delimited, contiguous words that are detected on a document page.
  • TABLE - A table that's detected on a document page. A table is grid-based information with two or more rows or columns, with a cell span of one row and one column each.
  • CELL - A cell within a detected table. The cell is the parent of the block that contains the text in the cell.
  • SELECTION_ELEMENT - A selection element such as an option button (radio button) or a check box that's detected on a document page. Use the value of SelectionStatus to determine the status of the selection element.
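
For example, to recover just the detected lines of text from a list of blocks, one can filter on BlockType as sketched below; the BlockType_LINE pattern name follows the amazonka 2.x enum naming and is an assumption.

  import Amazonka.Textract
  import Control.Lens ((^.))
  import Data.Maybe (mapMaybe)
  import Data.Text (Text)

  -- Text of every LINE block, in the order the blocks were returned.
  lineTexts :: [Block] -> [Text]
  lineTexts = mapMaybe lineText
    where
      lineText b
        | b ^. block_blockType == Just BlockType_LINE = b ^. block_text
        | otherwise                                   = Nothing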

BoundingBox

boundingBox_height :: Lens' BoundingBox (Maybe Double) Source #

The height of the bounding box as a ratio of the overall document page height.

boundingBox_left :: Lens' BoundingBox (Maybe Double) Source #

The left coordinate of the bounding box as a ratio of overall document page width.

boundingBox_width :: Lens' BoundingBox (Maybe Double) Source #

The width of the bounding box as a ratio of the overall document page width.

boundingBox_top :: Lens' BoundingBox (Maybe Double) Source #

The top coordinate of the bounding box as a ratio of overall document page height.
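
Since all four coordinates are ratios of the page size, converting a bounding box to pixel coordinates is plain arithmetic; the sketch below assumes the page width and height in pixels are known from the source image.

  import Amazonka.Textract
  import Control.Lens ((^.))

  -- (left, top, width, height) in pixels for a page of the given size.
  toPixels :: Double -> Double -> BoundingBox -> Maybe (Double, Double, Double, Double)
  toPixels pageWidth pageHeight box =
    (,,,)
      <$> fmap (* pageWidth)  (box ^. boundingBox_left)
      <*> fmap (* pageHeight) (box ^. boundingBox_top)
      <*> fmap (* pageWidth)  (box ^. boundingBox_width)
      <*> fmap (* pageHeight) (box ^. boundingBox_height)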

Document

document_s3Object :: Lens' Document (Maybe S3Object) Source #

Identifies an S3 object as the document source. The maximum size of a document that's stored in an S3 bucket is 5 MB.

document_bytes :: Lens' Document (Maybe ByteString) Source #

A blob of base64-encoded document bytes. The maximum size of a document that's provided in a blob of bytes is 5 MB. The document bytes must be in PNG or JPEG format.

If you're using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes passed using the Bytes field.

Note: This Lens automatically encodes and decodes Base64 data. The underlying isomorphism will encode to Base64 representation during serialisation, and decode from Base64 representation during deserialisation. This Lens accepts and returns only raw unencoded data.
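
Because the lens handles the Base64 round trip itself, raw file contents can be passed straight through, as in this sketch; newDocument is the generated smart constructor and the file path is supplied by the caller.

  import Amazonka.Textract
  import Control.Lens ((&), (?~))
  import qualified Data.ByteString as BS

  -- Load an image and attach its raw (unencoded) bytes to a Document.
  documentFromFile :: FilePath -> IO Document
  documentFromFile path = do
    raw <- BS.readFile path
    pure (newDocument & document_bytes ?~ raw)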

DocumentLocation

documentLocation_s3Object :: Lens' DocumentLocation (Maybe S3Object) Source #

The Amazon S3 bucket that contains the input document.

DocumentMetadata

documentMetadata_pages :: Lens' DocumentMetadata (Maybe Natural) Source #

The number of pages that are detected in the document.

ExpenseDetection

expenseDetection_text :: Lens' ExpenseDetection (Maybe Text) Source #

The word or line of text recognized by Amazon Textract.

expenseDetection_confidence :: Lens' ExpenseDetection (Maybe Double) Source #

The confidence in the detection, as a percentage.

ExpenseDocument

expenseDocument_lineItemGroups :: Lens' ExpenseDocument (Maybe [LineItemGroup]) Source #

Information detected on each table of a document, separated into LineItems.

expenseDocument_summaryFields :: Lens' ExpenseDocument (Maybe [ExpenseField]) Source #

Any information found outside of a table by Amazon Textract.

expenseDocument_expenseIndex :: Lens' ExpenseDocument (Maybe Natural) Source #

Denotes which invoice or receipt in the document the information is coming from. The first document is 1, the second 2, and so on.

ExpenseField

expenseField_labelDetection :: Lens' ExpenseField (Maybe ExpenseDetection) Source #

The explicitly stated label of a detected element.

expenseField_valueDetection :: Lens' ExpenseField (Maybe ExpenseDetection) Source #

The value of a detected element. Present in explicit and implicit elements.

expenseField_type :: Lens' ExpenseField (Maybe ExpenseType) Source #

The implied label of a detected element. Present alongside LabelDetection for explicit elements.

expenseField_pageNumber :: Lens' ExpenseField (Maybe Natural) Source #

The page number the value was detected on.
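
A small sketch of reading a label/value pair out of an ExpenseField with the lenses above; (^?) and _Just come from Control.Lens, and the label side may be Nothing for implicit fields, as described.

  import Amazonka.Textract
  import Control.Lens ((^?), _Just)
  import Data.Text (Text)

  -- The detected label text (if any) and value text (if any) of a field.
  fieldPair :: ExpenseField -> (Maybe Text, Maybe Text)
  fieldPair f =
    ( f ^? expenseField_labelDetection . _Just . expenseDetection_text . _Just
    , f ^? expenseField_valueDetection . _Just . expenseDetection_text . _Just
    )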

ExpenseType

expenseType_text :: Lens' ExpenseType (Maybe Text) Source #

The word or line of text detected by Amazon Textract.

expenseType_confidence :: Lens' ExpenseType (Maybe Double) Source #

The confidence of accuracy, as a percentage.

Geometry

geometry_boundingBox :: Lens' Geometry (Maybe BoundingBox) Source #

An axis-aligned coarse representation of the location of the recognized item on the document page.

geometry_polygon :: Lens' Geometry (Maybe [Point]) Source #

Within the bounding box, a fine-grained polygon around the recognized item.

HumanLoopActivationOutput

humanLoopActivationOutput_humanLoopArn :: Lens' HumanLoopActivationOutput (Maybe Text) Source #

The Amazon Resource Name (ARN) of the HumanLoop created.

humanLoopActivationOutput_humanLoopActivationConditionsEvaluationResults :: Lens' HumanLoopActivationOutput (Maybe Text) Source #

Shows the result of condition evaluations, including those conditions which activated a human review.

HumanLoopConfig

humanLoopConfig_humanLoopName :: Lens' HumanLoopConfig Text Source #

The name of the human workflow used for this image. This should be kept unique within a region.

humanLoopConfig_flowDefinitionArn :: Lens' HumanLoopConfig Text Source #

The Amazon Resource Name (ARN) of the flow definition.

HumanLoopDataAttributes

humanLoopDataAttributes_contentClassifiers :: Lens' HumanLoopDataAttributes (Maybe [ContentClassifier]) Source #

Sets whether the input image is free of personally identifiable information or adult content.

LineItemFields

lineItemFields_lineItemExpenseFields :: Lens' LineItemFields (Maybe [ExpenseField]) Source #

ExpenseFields used to show information from detected lines on a table.

LineItemGroup

lineItemGroup_lineItems :: Lens' LineItemGroup (Maybe [LineItemFields]) Source #

The breakdown of information on a particular line of a table.

lineItemGroup_lineItemGroupIndex :: Lens' LineItemGroup (Maybe Natural) Source #

The number used to identify a specific table in a document. The first table encountered will have a LineItemGroupIndex of 1, the second 2, etc.
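
Putting the expense types together, the sketch below flattens an ExpenseDocument into one list of ExpenseFields per table row, using only the lenses documented in this section.

  import Amazonka.Textract
  import Control.Lens ((^.))

  -- One inner list per detected line item, across all tables in the document.
  expenseRows :: ExpenseDocument -> [[ExpenseField]]
  expenseRows doc =
    [ concat (item ^. lineItemFields_lineItemExpenseFields)
    | grp  <- concat (doc ^. expenseDocument_lineItemGroups)
    , item <- concat (grp ^. lineItemGroup_lineItems)
    ]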

NotificationChannel

notificationChannel_sNSTopicArn :: Lens' NotificationChannel Text Source #

The Amazon SNS topic that Amazon Textract posts the completion status to.

notificationChannel_roleArn :: Lens' NotificationChannel Text Source #

The Amazon Resource Name (ARN) of an IAM role that gives Amazon Textract publishing permissions to the Amazon SNS topic.

OutputConfig

outputConfig_s3Prefix :: Lens' OutputConfig (Maybe Text) Source #

The prefix of the object key that the output will be saved to. When not provided, the prefix defaults to "textract_output".

outputConfig_s3Bucket :: Lens' OutputConfig Text Source #

The name of the bucket your output will go to.
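
A minimal sketch of an OutputConfig for a customer bucket; newOutputConfig is assumed to be the generated smart constructor taking the required bucket name, and the bucket and prefix values are placeholders.

  {-# LANGUAGE OverloadedStrings #-}

  import Amazonka.Textract
  import Control.Lens ((&), (?~))

  -- Send results to a customer bucket under a custom prefix.
  resultsLocation :: OutputConfig
  resultsLocation =
    newOutputConfig "example-results-bucket"
      & outputConfig_s3Prefix ?~ "textract/analysis"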

Point

point_x :: Lens' Point (Maybe Double) Source #

The value of the X coordinate for a point on a Polygon.

point_y :: Lens' Point (Maybe Double) Source #

The value of the Y coordinate for a point on a Polygon.

Relationship

relationship_ids :: Lens' Relationship (Maybe [Text]) Source #

An array of IDs for related blocks. You can get the type of the relationship from the Type element.

relationship_type :: Lens' Relationship (Maybe RelationshipType) Source #

The type of relationship that the blocks in the IDs array have with the current block. The relationship can be VALUE or CHILD. A relationship of type VALUE is a list that contains the ID of the VALUE block that's associated with the KEY of a key-value pair. A relationship of type CHILD is a list of IDs that identify WORD blocks in the case of lines, Cell blocks in the case of Tables, and WORD blocks in the case of Selection Elements.
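
To resolve those IDs back to blocks, a lookup table keyed by block_id is usually built first; the sketch below then follows CHILD relationships from a parent block. The RelationshipType_CHILD pattern name follows the amazonka 2.x enum naming and is an assumption.

  import Amazonka.Textract
  import Control.Lens ((^.))
  import qualified Data.Map.Strict as Map
  import Data.Text (Text)

  -- Index blocks by their identifier.
  blockIndex :: [Block] -> Map.Map Text Block
  blockIndex blocks = Map.fromList [(i, b) | b <- blocks, Just i <- [b ^. block_id]]

  -- Child blocks of a parent, in the order listed in its CHILD relationship.
  childBlocks :: Map.Map Text Block -> Block -> [Block]
  childBlocks byId parent =
    [ child
    | rel   <- concat (parent ^. block_relationships)
    , rel ^. relationship_type == Just RelationshipType_CHILD
    , ident <- concat (rel ^. relationship_ids)
    , Just child <- [Map.lookup ident byId]
    ]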

S3Object

s3Object_bucket :: Lens' S3Object (Maybe Text) Source #

The name of the S3 bucket. Note that the # character is not valid in the file name.

s3Object_name :: Lens' S3Object (Maybe Text) Source #

The file name of the input document. Synchronous operations can use image files that are in JPEG or PNG format. Asynchronous operations also support PDF format files.

s3Object_version :: Lens' S3Object (Maybe Text) Source #

If the bucket has versioning enabled, you can specify the object version.

Warning

warning_pages :: Lens' Warning (Maybe [Natural]) Source #

A list of the pages that the warning applies to.

warning_errorCode :: Lens' Warning (Maybe Text) Source #

The error code for the warning.