| Copyright | (c) 2013-2021 Brendan Hay |
|---|---|
| License | Mozilla Public License, v. 2.0. |
| Maintainer | Brendan Hay <brendan.g.hay+amazonka@gmail.com> |
| Stability | auto-generated |
| Portability | non-portable (GHC extensions) |
| Safe Haskell | None |
Amazonka.SageMaker.Types.InputConfig
Description
Synopsis
- data InputConfig = InputConfig' {}
- newInputConfig :: Text -> Text -> Framework -> InputConfig
- inputConfig_frameworkVersion :: Lens' InputConfig (Maybe Text)
- inputConfig_s3Uri :: Lens' InputConfig Text
- inputConfig_dataInputConfig :: Lens' InputConfig Text
- inputConfig_framework :: Lens' InputConfig Framework
Documentation
data InputConfig Source #
Contains information about the location of input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.
See: newInputConfig smart constructor.
Constructors
| InputConfig' | |

Fields

- frameworkVersion :: Maybe Text (the framework version to use; optional)
- s3Uri :: Text (the S3 path where the model artifacts are stored)
- dataInputConfig :: Text (the expected data inputs, as a JSON dictionary)
- framework :: Framework (the framework in which the model was trained)
Instances
newInputConfig Source #

Arguments

| :: Text | $sel:s3Uri:InputConfig' |
| -> Text | $sel:dataInputConfig:InputConfig' |
| -> Framework | $sel:framework:InputConfig' |
| -> InputConfig | |
Create a value of InputConfig with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:frameworkVersion:InputConfig', inputConfig_frameworkVersion - Specifies the framework version to use.
This API field is only supported for PyTorch framework versions 1.4,
1.5, and 1.6 for cloud instance target devices: ml_c4, ml_c5,
ml_m4, ml_m5, ml_p2, ml_p3, and ml_g4dn.
$sel:s3Uri:InputConfig', inputConfig_s3Uri - The S3 path where the model artifacts, which result from model training,
are stored. This path must point to a single gzip compressed tar archive
(.tar.gz suffix).
$sel:dataInputConfig:InputConfig', inputConfig_dataInputConfig - Specifies the name and shape of the expected data inputs for your
trained model, in the form of a JSON dictionary. The data inputs are
InputConfig$Framework specific.
TensorFlow: You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

Examples for one input:

- If using the console, {"input":[1,1024,1024,3]}
- If using the CLI, {\"input\":[1,1024,1024,3]}

Examples for two inputs:

- If using the console, {"data1": [1,28,28,1], "data2":[1,28,28,1]}
- If using the CLI, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}
KERAS: You must specify the name and shape (NCHW format) of expected data inputs using a dictionary format for your trained model. Note that while Keras model artifacts should be uploaded in NHWC (channel-last) format, DataInputConfig should be specified in NCHW (channel-first) format. The dictionary formats required for the console and CLI are different.

Examples for one input:

- If using the console, {"input_1":[1,3,224,224]}
- If using the CLI, {\"input_1\":[1,3,224,224]}

Examples for two inputs:

- If using the console, {"input_1": [1,3,224,224], "input_2":[1,3,224,224]}
- If using the CLI, {\"input_1\": [1,3,224,224], \"input_2\":[1,3,224,224]}
MXNET/ONNX/DARKNET: You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

Examples for one input:

- If using the console, {"data":[1,3,1024,1024]}
- If using the CLI, {\"data\":[1,3,1024,1024]}

Examples for two inputs:

- If using the console, {"var1": [1,1,28,28], "var2":[1,1,28,28]}
- If using the CLI, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}
PyTorch: You can either specify the name and shape (NCHW format) of expected data inputs in order using a dictionary format for your trained model, or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.

Examples for one input in dictionary format:

- If using the console, {"input0":[1,3,224,224]}
- If using the CLI, {\"input0\":[1,3,224,224]}

Example for one input in list format: [[1,3,224,224]]

Examples for two inputs in dictionary format:

- If using the console, {"input0":[1,3,224,224], "input1":[1,3,224,224]}
- If using the CLI, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}

Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]
XGBOOST: input data name and shape are not needed.
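Since DataInputConfig is passed to the API as a JSON string, one way to avoid the console/CLI escaping pitfalls above is to build the dictionary with aeson and render it to Text. A minimal sketch, with illustrative PyTorch-style input names and NCHW shapes:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (encode, object, (.=))
import qualified Data.ByteString.Lazy as BL
import Data.Text (Text)
import qualified Data.Text.Encoding as TE

-- Render the expected-input dictionary with aeson instead of
-- hand-escaping JSON. Names and shapes here are illustrative.
pytorchDataInputConfig :: Text
pytorchDataInputConfig =
  TE.decodeUtf8 . BL.toStrict . encode $
    object
      [ "input0" .= ([1, 3, 224, 224] :: [Int])
      , "input1" .= ([1, 3, 224, 224] :: [Int])
      ]
-- Produces {"input0":[1,3,224,224],"input1":[1,3,224,224]}
-- (key order may vary between aeson versions).
```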
DataInputConfig supports the following parameters for CoreML
OutputConfig$TargetDevice (ML Model format):
- shape: Input shape, for example {"input_1": {"shape": [1,224,224,3]}}. In addition to static input shapes, the CoreML converter supports flexible input shapes:
  - Range Dimension. You can use the Range Dimension feature if you know the input shape will be within some specific interval in that dimension, for example: {"input_1": {"shape": ["1..10", 224, 224, 3]}}
  - Enumerated shapes. Sometimes, the models are trained to work only on a select set of inputs. You can enumerate all supported input shapes, for example: {"input_1": {"shape": [[1, 224, 224, 3], [1, 160, 160, 3]]}}
- default_shape: Default input shape. You can set a default shape during conversion for both Range Dimension and Enumerated Shapes. For example, {"input_1": {"shape": ["1..10", 224, 224, 3], "default_shape": [1, 224, 224, 3]}}
- type: Input type. Allowed values: Image and Tensor. By default, the converter generates an ML Model with inputs of type Tensor (MultiArray). You can set the input type to Image. The Image input type requires additional input parameters such as bias and scale.
- bias: If the input type is Image, you need to provide the bias vector.
- scale: If the input type is Image, you need to provide a scale factor.
CoreML ClassifierConfig parameters can be specified using
OutputConfig$CompilerOptions. The CoreML converter supports TensorFlow and
PyTorch models. CoreML conversion examples:
Tensor type input:
"DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3]}}
Tensor type input without input name (PyTorch):
"DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224]}]
Image type input:
"DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}}"CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}
Image type input without input name (PyTorch):
"DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}]"CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}
Depending on the model format, DataInputConfig requires the following
parameters for ml_eia2 OutputConfig$TargetDevice.
For TensorFlow models saved in the SavedModel format, specify the input names from signature_def_key and the input model shapes for DataInputConfig. Specify the signature_def_key in OutputConfig$CompilerOptions if the model does not use TensorFlow's default signature def key. For example:

"DataInputConfig": {"inputs": [1, 224, 224, 3]}

"CompilerOptions": {"signature_def_key": "serving_custom"}
For TensorFlow models saved as a frozen graph, specify the input tensor names and shapes in DataInputConfig and the output tensor names for output_names in OutputConfig$CompilerOptions. For example:

"DataInputConfig": {"input_tensor:0": [1, 224, 224, 3]}

"CompilerOptions": {"output_names": ["output_tensor:0"]}
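A short companion sketch for the SavedModel case above, encoding both strings with aeson; "serving_custom" is the example's placeholder signature def key, not a fixed value:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (encode, object, (.=))
import qualified Data.ByteString.Lazy.Char8 as BL8

-- The SavedModel example above as two encoded JSON strings.
eia2DataInputConfig :: BL8.ByteString
eia2DataInputConfig =
  encode $ object ["inputs" .= ([1, 224, 224, 3] :: [Int])]

eia2CompilerOptions :: BL8.ByteString
eia2CompilerOptions =
  encode $ object ["signature_def_key" .= ("serving_custom" :: String)]
```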
$sel:framework:InputConfig', inputConfig_framework - Identifies the framework in which the model was trained. For example:
TENSORFLOW.
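Putting the pieces together, a minimal construction sketch. It assumes the generated Framework_PYTORCH pattern from Amazonka.SageMaker.Types.Framework and the (&) and (?~) operators from Control.Lens; the bucket, archive, input name, and version are placeholders:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Amazonka.SageMaker.Types.Framework (Framework (..))
import Amazonka.SageMaker.Types.InputConfig
import Control.Lens ((&), (?~))

-- Build an InputConfig for a hypothetical PyTorch model.
exampleConfig :: InputConfig
exampleConfig =
  newInputConfig
    "s3://my-bucket/model.tar.gz"  -- s3Uri: a single gzip-compressed tar archive
    "{\"input0\":[1,3,224,224]}"   -- dataInputConfig: JSON dictionary of inputs
    Framework_PYTORCH              -- framework
    & inputConfig_frameworkVersion ?~ "1.6"  -- optional field, set via its lens
```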
inputConfig_frameworkVersion :: Lens' InputConfig (Maybe Text) Source #
Specifies the framework version to use.
This API field is only supported for PyTorch framework versions 1.4,
1.5, and 1.6 for cloud instance target devices: ml_c4, ml_c5,
ml_m4, ml_m5, ml_p2, ml_p3, and ml_g4dn.
inputConfig_s3Uri :: Lens' InputConfig Text Source #
The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
inputConfig_dataInputConfig :: Lens' InputConfig Text Source #
Specifies the name and shape of the expected data inputs for your trained model, in the form of a JSON dictionary. The data inputs are InputConfig$Framework specific.
TensorFlow: You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

Examples for one input:

- If using the console, {"input":[1,1024,1024,3]}
- If using the CLI, {\"input\":[1,1024,1024,3]}

Examples for two inputs:

- If using the console, {"data1": [1,28,28,1], "data2":[1,28,28,1]}
- If using the CLI, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}
KERAS: You must specify the name and shape (NCHW format) of expected data inputs using a dictionary format for your trained model. Note that while Keras model artifacts should be uploaded in NHWC (channel-last) format, DataInputConfig should be specified in NCHW (channel-first) format. The dictionary formats required for the console and CLI are different.

Examples for one input:

- If using the console, {"input_1":[1,3,224,224]}
- If using the CLI, {\"input_1\":[1,3,224,224]}

Examples for two inputs:

- If using the console, {"input_1": [1,3,224,224], "input_2":[1,3,224,224]}
- If using the CLI, {\"input_1\": [1,3,224,224], \"input_2\":[1,3,224,224]}
MXNET/ONNX/DARKNET: You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

Examples for one input:

- If using the console, {"data":[1,3,1024,1024]}
- If using the CLI, {\"data\":[1,3,1024,1024]}

Examples for two inputs:

- If using the console, {"var1": [1,1,28,28], "var2":[1,1,28,28]}
- If using the CLI, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}
PyTorch: You can either specify the name and shape (NCHW format) of expected data inputs in order using a dictionary format for your trained model, or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.

Examples for one input in dictionary format:

- If using the console, {"input0":[1,3,224,224]}
- If using the CLI, {\"input0\":[1,3,224,224]}

Example for one input in list format: [[1,3,224,224]]

Examples for two inputs in dictionary format:

- If using the console, {"input0":[1,3,224,224], "input1":[1,3,224,224]}
- If using the CLI, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}

Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]
XGBOOST: input data name and shape are not needed.
DataInputConfig supports the following parameters for CoreML
OutputConfig$TargetDevice (ML Model format):
- shape: Input shape, for example {"input_1": {"shape": [1,224,224,3]}}. In addition to static input shapes, the CoreML converter supports flexible input shapes:
  - Range Dimension. You can use the Range Dimension feature if you know the input shape will be within some specific interval in that dimension, for example: {"input_1": {"shape": ["1..10", 224, 224, 3]}}
  - Enumerated shapes. Sometimes, the models are trained to work only on a select set of inputs. You can enumerate all supported input shapes, for example: {"input_1": {"shape": [[1, 224, 224, 3], [1, 160, 160, 3]]}}
- default_shape: Default input shape. You can set a default shape during conversion for both Range Dimension and Enumerated Shapes. For example, {"input_1": {"shape": ["1..10", 224, 224, 3], "default_shape": [1, 224, 224, 3]}}
- type: Input type. Allowed values: Image and Tensor. By default, the converter generates an ML Model with inputs of type Tensor (MultiArray). You can set the input type to Image. The Image input type requires additional input parameters such as bias and scale.
- bias: If the input type is Image, you need to provide the bias vector.
- scale: If the input type is Image, you need to provide a scale factor.
CoreML ClassifierConfig parameters can be specified using
OutputConfig$CompilerOptions. The CoreML converter supports TensorFlow and
PyTorch models. CoreML conversion examples:
Tensor type input:
"DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3]}}
Tensor type input without input name (PyTorch):
"DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224]}]
Image type input:
"DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}}"CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}
Image type input without input name (PyTorch):
"DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}]"CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}
Depending on the model format, DataInputConfig requires the following
parameters for ml_eia2 OutputConfig$TargetDevice.
For TensorFlow models saved in the SavedModel format, specify the input names from signature_def_key and the input model shapes for DataInputConfig. Specify the signature_def_key in OutputConfig$CompilerOptions if the model does not use TensorFlow's default signature def key. For example:

"DataInputConfig": {"inputs": [1, 224, 224, 3]}

"CompilerOptions": {"signature_def_key": "serving_custom"}
For TensorFlow models saved as a frozen graph, specify the input tensor names and shapes in DataInputConfig and the output tensor names for output_names in OutputConfig$CompilerOptions. For example:

"DataInputConfig": {"input_tensor:0": [1, 224, 224, 3]}

"CompilerOptions": {"output_names": ["output_tensor:0"]}
inputConfig_framework :: Lens' InputConfig Framework Source #
Identifies the framework in which the model was trained. For example: TENSORFLOW.
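For the read direction, a small sketch using (^.) from Control.Lens to pull fields back out of an InputConfig, for example one returned by a Describe* call:

```haskell
import Amazonka.SageMaker.Types.InputConfig
import Control.Lens ((^.))
import Data.Text (Text)

-- Extract the model location and optional framework version
-- via the generated lenses.
describeInput :: InputConfig -> (Text, Maybe Text)
describeInput cfg =
  ( cfg ^. inputConfig_s3Uri
  , cfg ^. inputConfig_frameworkVersion
  )
```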