Oracle Cloud Infrastructure v2.33.0 published on Thursday, May 1, 2025 by Pulumi
oci.AiLanguage.getModel
This data source provides details about a specific Model resource in the Oracle Cloud Infrastructure AI Language service.
Gets a model by identifier.
Example Usage
variables:
  testModel:
    fn::invoke:
      function: oci:AiLanguage:getModel
      arguments:
        modelId: ${testModelOciAiLanguageModel.id}
Using getModel
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getModel(args: GetModelArgs, opts?: InvokeOptions): Promise<GetModelResult>
function getModelOutput(args: GetModelOutputArgs, opts?: InvokeOptions): Output<GetModelResult>

def get_model(id: Optional[str] = None,
              opts: Optional[InvokeOptions] = None) -> GetModelResult
def get_model_output(id: Optional[pulumi.Input[str]] = None,
              opts: Optional[InvokeOptions] = None) -> Output[GetModelResult]

func LookupModel(ctx *Context, args *LookupModelArgs, opts ...InvokeOption) (*LookupModelResult, error)
func LookupModelOutput(ctx *Context, args *LookupModelOutputArgs, opts ...InvokeOption) LookupModelResultOutput

> Note: This function is named LookupModel in the Go SDK.
public static class GetModel
{
    public static Task<GetModelResult> InvokeAsync(GetModelArgs args, InvokeOptions? opts = null)
    public static Output<GetModelResult> Invoke(GetModelInvokeArgs args, InvokeOptions? opts = null)
}

public static CompletableFuture<GetModelResult> getModel(GetModelArgs args, InvokeOptions options)
public static Output<GetModelResult> getModel(GetModelArgs args, InvokeOptions options)
fn::invoke:
  function: oci:AiLanguage/getModel:getModel
  arguments:
    # arguments dictionary

The following arguments are supported:
- Id string
- The OCID of the model, a unique identifier that is immutable on creation
- Id string
- The OCID of the model, a unique identifier that is immutable on creation
- id String
- The OCID of the model, a unique identifier that is immutable on creation
- id string
- The OCID of the model, a unique identifier that is immutable on creation
- id str
- The OCID of the model, a unique identifier that is immutable on creation
- id String
- The OCID of the model, a unique identifier that is immutable on creation
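The `id` argument must be an OCID. As a rough, unofficial illustration of the general OCID shape (`ocid1.<resource type>.<realm>.<region>.<unique id>`, where the region segment may be empty for region-free resources), a sketch like the following can sanity-check an identifier before passing it in; the authoritative syntax is defined by OCI, not by this helper:

```python
def looks_like_ocid(value: str) -> bool:
    """Rough shape check for an OCID -- a sketch, not an official validator.

    OCIDs generally look like ocid1.<resource type>.<realm>.<region>.<unique id>,
    where the region segment may be empty for region-free resources.
    """
    parts = value.split(".")
    return (
        len(parts) == 5
        and parts[0] == "ocid1"
        # resource type, realm, and unique id must be non-empty
        and all(parts[i] for i in (1, 2, 4))
    )
```

For example, `looks_like_ocid("ocid1.tenancy.oc1..aaaaexample")` accepts a region-free tenancy OCID, while a plain display name is rejected.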
getModel Result
The following output properties are available:
- CompartmentId string
- The OCID for the model's compartment.
- Dictionary<string, string>
- Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
- Description string
- A short description of the Model.
- DisplayName string
- A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
- EvaluationResults List<GetModel Evaluation Result> 
- model training results of different models
- Dictionary<string, string>
- Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
- Id string
- The OCID of the model, a unique identifier that is immutable on creation
- LifecycleDetails string
- A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
- ModelDetails List<GetModel Model Detail> 
- Possible model types
- ProjectId string
- The OCID of the project to associate with the model.
- State string
- The state of the model.
- Dictionary<string, string>
- Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
- TestStrategies List<GetModel Test Strategy>
- Possible strategy for the testing and (optional) validation dataset.
- TimeCreated string
- The time the model was created. An RFC3339 formatted datetime string.
- TimeUpdated string
- The time the model was updated. An RFC3339 formatted datetime string.
- TrainingDatasets List<GetModel Training Dataset>
- Possible dataset types
- Version string
- For pre-trained models, this identifies the model type version used for model creation. For custom models, identifying the model by model ID alone is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0
- CompartmentId string
- The OCID for the model's compartment.
- map[string]string
- Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
- Description string
- A short description of the Model.
- DisplayName string
- A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
- EvaluationResults []GetModel Evaluation Result 
- model training results of different models
- map[string]string
- Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
- Id string
- The OCID of the model, a unique identifier that is immutable on creation
- LifecycleDetails string
- A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
- ModelDetails []GetModel Model Detail 
- Possible model types
- ProjectId string
- The OCID of the project to associate with the model.
- State string
- The state of the model.
- map[string]string
- Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
- TestStrategies []GetModel Test Strategy
- Possible strategy for the testing and (optional) validation dataset.
- TimeCreated string
- The time the model was created. An RFC3339 formatted datetime string.
- TimeUpdated string
- The time the model was updated. An RFC3339 formatted datetime string.
- TrainingDatasets []GetModel Training Dataset
- Possible dataset types
- Version string
- For pre-trained models, this identifies the model type version used for model creation. For custom models, identifying the model by model ID alone is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0
- compartmentId String
- The OCID for the model's compartment.
- Map<String,String>
- Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
- description String
- A short description of the Model.
- displayName String
- A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
- evaluationResults List<GetModel Evaluation Result> 
- model training results of different models
- Map<String,String>
- Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
- id String
- The OCID of the model, a unique identifier that is immutable on creation
- lifecycleDetails String
- A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
- modelDetails List<GetModel Model Detail> 
- Possible model types
- projectId String
- The OCID of the project to associate with the model.
- state String
- The state of the model.
- Map<String,String>
- Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
- testStrategies List<GetModel Test Strategy>
- Possible strategy for the testing and (optional) validation dataset.
- timeCreated String
- The time the model was created. An RFC3339 formatted datetime string.
- timeUpdated String
- The time the model was updated. An RFC3339 formatted datetime string.
- trainingDatasets List<GetModel Training Dataset>
- Possible dataset types
- version String
- For pre-trained models, this identifies the model type version used for model creation. For custom models, identifying the model by model ID alone is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0
- compartmentId string
- The OCID for the model's compartment.
- {[key: string]: string}
- Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
- description string
- A short description of the Model.
- displayName string
- A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
- evaluationResults GetModel Evaluation Result[] 
- model training results of different models
- {[key: string]: string}
- Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
- id string
- The OCID of the model, a unique identifier that is immutable on creation
- lifecycleDetails string
- A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
- modelDetails GetModel Model Detail[] 
- Possible model types
- projectId string
- The OCID of the project to associate with the model.
- state string
- The state of the model.
- {[key: string]: string}
- Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
- testStrategies GetModel Test Strategy[]
- Possible strategy for the testing and (optional) validation dataset.
- timeCreated string
- The time the model was created. An RFC3339 formatted datetime string.
- timeUpdated string
- The time the model was updated. An RFC3339 formatted datetime string.
- trainingDatasets GetModel Training Dataset[]
- Possible dataset types
- version string
- For pre-trained models, this identifies the model type version used for model creation. For custom models, identifying the model by model ID alone is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0
- compartment_id str
- The OCID for the model's compartment.
- Mapping[str, str]
- Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
- description str
- A short description of the Model.
- display_name str
- A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
- evaluation_results Sequence[GetModel Evaluation Result] 
- model training results of different models
- Mapping[str, str]
- Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
- id str
- The OCID of the model, a unique identifier that is immutable on creation
- lifecycle_details str
- A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
- model_details Sequence[GetModel Model Detail] 
- Possible model types
- project_id str
- The OCID of the project to associate with the model.
- state str
- The state of the model.
- Mapping[str, str]
- Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
- test_strategies Sequence[GetModel Test Strategy]
- Possible strategy for the testing and (optional) validation dataset.
- time_created str
- The time the model was created. An RFC3339 formatted datetime string.
- time_updated str
- The time the model was updated. An RFC3339 formatted datetime string.
- training_datasets Sequence[GetModel Training Dataset]
- Possible dataset types
- version str
- For pre-trained models, this identifies the model type version used for model creation. For custom models, identifying the model by model ID alone is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0
- compartmentId String
- The OCID for the model's compartment.
- Map<String>
- Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
- description String
- A short description of the Model.
- displayName String
- A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
- evaluationResults List<Property Map>
- model training results of different models
- Map<String>
- Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
- id String
- The OCID of the model, a unique identifier that is immutable on creation
- lifecycleDetails String
- A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
- modelDetails List<Property Map>
- Possible model types
- projectId String
- The OCID of the project to associate with the model.
- state String
- The state of the model.
- Map<String>
- Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
- testStrategies List<Property Map>
- Possible strategy for the testing and (optional) validation dataset.
- timeCreated String
- The time the model was created. An RFC3339 formatted datetime string.
- timeUpdated String
- The time the model was updated. An RFC3339 formatted datetime string.
- trainingDatasets List<Property Map>
- Possible dataset types
- version String
- For pre-trained models, this identifies the model type version used for model creation. For custom models, identifying the model by model ID alone is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0
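The `version` string is `::`-delimited, as in `ai-lang::NER_V1::CUSTOM-V0`. A minimal sketch of splitting it into its three segments; the segment names used here (`service`, `model_type`, `custom_version`) are illustrative assumptions for this sketch, not official OCI terminology:

```python
def split_model_version(version: str) -> dict:
    """Split a version string such as 'ai-lang::NER_V1::CUSTOM-V0' on '::'.

    The dictionary keys are illustrative names chosen for this sketch,
    not official OCI field names.
    """
    service, model_type, custom_version = version.split("::")
    return {
        "service": service,
        "model_type": model_type,
        "custom_version": custom_version,
    }
```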
Supporting Types
GetModelEvaluationResult   
- ClassMetrics List<GetModel Evaluation Result Class Metric> 
- List of text classification metrics
- ConfusionMatrix string
- class level confusion matrix
- EntityMetrics List<GetModel Evaluation Result Entity Metric> 
- List of entity metrics
- Labels List<string>
- labels
- Metrics List<GetModel Evaluation Result Metric>
- Model level named entity recognition metrics
- ModelType string
- Model type
- ClassMetrics []GetModel Evaluation Result Class Metric 
- List of text classification metrics
- ConfusionMatrix string
- class level confusion matrix
- EntityMetrics []GetModel Evaluation Result Entity Metric 
- List of entity metrics
- Labels []string
- labels
- Metrics []GetModel Evaluation Result Metric
- Model level named entity recognition metrics
- ModelType string
- Model type
- classMetrics List<GetModel Evaluation Result Class Metric> 
- List of text classification metrics
- confusionMatrix String
- class level confusion matrix
- entityMetrics List<GetModel Evaluation Result Entity Metric> 
- List of entity metrics
- labels List<String>
- labels
- metrics List<GetModel Evaluation Result Metric>
- Model level named entity recognition metrics
- modelType String
- Model type
- classMetrics GetModel Evaluation Result Class Metric[] 
- List of text classification metrics
- confusionMatrix string
- class level confusion matrix
- entityMetrics GetModel Evaluation Result Entity Metric[] 
- List of entity metrics
- labels string[]
- labels
- metrics GetModel Evaluation Result Metric[]
- Model level named entity recognition metrics
- modelType string
- Model type
- class_metrics Sequence[GetModel Evaluation Result Class Metric] 
- List of text classification metrics
- confusion_matrix str
- class level confusion matrix
- entity_metrics Sequence[GetModel Evaluation Result Entity Metric] 
- List of entity metrics
- labels Sequence[str]
- labels
- metrics Sequence[GetModel Evaluation Result Metric]
- Model level named entity recognition metrics
- model_type str
- Model type
- classMetrics List<Property Map>
- List of text classification metrics
- confusionMatrix String
- class level confusion matrix
- entityMetrics List<Property Map>
- List of entity metrics
- labels List<String>
- labels
- metrics List<Property Map>
- Model level named entity recognition metrics
- modelType String
- Model type
GetModelEvaluationResultClassMetric     
- F1 double
- F1-score is a measure of a model’s accuracy on a dataset
- Label string
- Entity label
- Precision double
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- Recall double
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- Support double
- number of samples in the test set
- F1 float64
- F1-score is a measure of a model’s accuracy on a dataset
- Label string
- Entity label
- Precision float64
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- Recall float64
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- Support float64
- number of samples in the test set
- f1 Double
- F1-score is a measure of a model’s accuracy on a dataset
- label String
- Entity label
- precision Double
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall Double
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- support Double
- number of samples in the test set
- f1 number
- F1-score is a measure of a model’s accuracy on a dataset
- label string
- Entity label
- precision number
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall number
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- support number
- number of samples in the test set
- f1 float
- F1-score is a measure of a model’s accuracy on a dataset
- label str
- Entity label
- precision float
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall float
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- support float
- number of samples in the test set
- f1 Number
- F1-score is a measure of a model’s accuracy on a dataset
- label String
- Entity label
- precision Number
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall Number
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- support Number
- number of samples in the test set
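The `f1`, `precision`, and `recall` fields above are related: F1 is the harmonic mean of precision and recall. A small plain-Python illustration, independent of the OCI SDK:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; defined as 0.0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With precision 0.8 and recall 0.5 this gives roughly 0.615; a model scores a high F1 only when both precision and recall are high.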
GetModelEvaluationResultEntityMetric     
- F1 double
- F1-score is a measure of a model’s accuracy on a dataset
- Label string
- Entity label
- Precision double
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- Recall double
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- F1 float64
- F1-score is a measure of a model’s accuracy on a dataset
- Label string
- Entity label
- Precision float64
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- Recall float64
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- f1 Double
- F1-score is a measure of a model’s accuracy on a dataset
- label String
- Entity label
- precision Double
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall Double
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- f1 number
- F1-score is a measure of a model’s accuracy on a dataset
- label string
- Entity label
- precision number
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall number
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- f1 float
- F1-score is a measure of a model’s accuracy on a dataset
- label str
- Entity label
- precision float
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall float
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- f1 Number
- F1-score is a measure of a model’s accuracy on a dataset
- label String
- Entity label
- precision Number
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall Number
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
GetModelEvaluationResultMetric    
- Accuracy double
- The fraction of the labels that were correctly recognised.
- MacroF1 double
- F1-score is a measure of a model’s accuracy on a dataset
- MacroPrecision double
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- MacroRecall double
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- MicroF1 double
- F1-score is a measure of a model’s accuracy on a dataset
- MicroPrecision double
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- MicroRecall double
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- WeightedF1 double
- F1-score is a measure of a model’s accuracy on a dataset
- WeightedPrecision double
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- WeightedRecall double
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- Accuracy float64
- The fraction of the labels that were correctly recognised.
- MacroF1 float64
- F1-score is a measure of a model’s accuracy on a dataset
- MacroPrecision float64
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- MacroRecall float64
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- MicroF1 float64
- F1-score is a measure of a model’s accuracy on a dataset
- MicroPrecision float64
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- MicroRecall float64
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- WeightedF1 float64
- F1-score is a measure of a model’s accuracy on a dataset
- WeightedPrecision float64
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- WeightedRecall float64
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- accuracy Double
- The fraction of the labels that were correctly recognised.
- macroF1 Double
- F1-score is a measure of a model’s accuracy on a dataset
- macroPrecision Double
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- macroRecall Double
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- microF1 Double
- F1-score is a measure of a model’s accuracy on a dataset
- microPrecision Double
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- microRecall Double
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- weightedF1 Double
- F1-score is a measure of a model’s accuracy on a dataset
- weightedPrecision Double
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- weightedRecall Double
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- accuracy number
- The fraction of the labels that were correctly recognised.
- macroF1 number
- F1-score is a measure of a model’s accuracy on a dataset
- macroPrecision number
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- macroRecall number
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- microF1 number
- F1-score is a measure of a model’s accuracy on a dataset
- microPrecision number
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- microRecall number
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- weightedF1 number
- F1-score is a measure of a model’s accuracy on a dataset
- weightedPrecision number
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- weightedRecall number
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- accuracy float
- The fraction of the labels that were correctly recognised.
- macro_f1 float
- F1-score is a measure of a model’s accuracy on a dataset
- macro_precision float
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- macro_recall float
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- micro_f1 float
- F1-score is a measure of a model’s accuracy on a dataset
- micro_precision float
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- micro_recall float
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- weighted_f1 float
- F1-score is a measure of a model’s accuracy on a dataset
- weighted_precision float
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- weighted_recall float
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly predicted.
- accuracy Number
- The fraction of the labels that were correctly recognised .
- macroF1 Number
- F1-score, is a measure of a model’s accuracy on a dataset
- macroPrecision Number
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- macroRecall Number
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- microF1 Number
- F1-score is a measure of a model's accuracy on a dataset
- microPrecision Number
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- microRecall Number
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- weightedF1 Number
- F1-score is a measure of a model's accuracy on a dataset
- weightedPrecision Number
- Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- weightedRecall Number
- Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
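The macro, micro, and weighted variants above differ only in how per-class scores are aggregated. The following sketch (with made-up counts, not tied to this API) shows the three averaging schemes:

```python
# Illustrative computation of precision/recall/F1 aggregation from
# hypothetical per-class true-positive / false-positive / false-negative counts.
per_class = {
    "PERSON": {"tp": 8, "fp": 2, "fn": 1},
    "ORG":    {"tp": 3, "fp": 1, "fn": 4},
}

def prf(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Macro: average the per-class metrics (every class weighted equally).
scores = [prf(**c) for c in per_class.values()]
macro_precision = sum(p for p, _, _ in scores) / len(scores)
macro_recall = sum(r for _, r, _ in scores) / len(scores)
macro_f1 = sum(f for _, _, f in scores) / len(scores)

# Micro: pool all counts first (every prediction weighted equally).
tp = sum(c["tp"] for c in per_class.values())
fp = sum(c["fp"] for c in per_class.values())
fn = sum(c["fn"] for c in per_class.values())
micro_precision, micro_recall, micro_f1 = prf(tp, fp, fn)

# Weighted: average per-class metrics weighted by class support (tp + fn).
support = {k: c["tp"] + c["fn"] for k, c in per_class.items()}
total = sum(support.values())
weighted_f1 = sum(prf(**c)[2] * support[k] / total for k, c in per_class.items())
```

Macro averaging highlights performance on rare classes; micro and weighted averaging track overall prediction volume.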
GetModelModelDetail   
- ClassificationModes List<GetModelModelDetailClassificationMode>
- Classification modes supported by the model
- LanguageCode string
- Supported language; the default value is en
- ModelType string
- Model type
- Version string
- For pre-trained models this identifies the model type version used at model creation. For custom models, identifying a model by its model ID is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
- ClassificationModes []GetModelModelDetailClassificationMode
- Classification modes supported by the model
- LanguageCode string
- Supported language; the default value is en
- ModelType string
- Model type
- Version string
- For pre-trained models this identifies the model type version used at model creation. For custom models, identifying a model by its model ID is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
- classificationModes List<GetModelModelDetailClassificationMode>
- Classification modes supported by the model
- languageCode String
- Supported language; the default value is en
- modelType String
- Model type
- version String
- For pre-trained models this identifies the model type version used at model creation. For custom models, identifying a model by its model ID is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
- classificationModes GetModelModelDetailClassificationMode[]
- Classification modes supported by the model
- languageCode string
- Supported language; the default value is en
- modelType string
- Model type
- version string
- For pre-trained models this identifies the model type version used at model creation. For custom models, identifying a model by its model ID is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
- classification_modes Sequence[GetModelModelDetailClassificationMode]
- Classification modes supported by the model
- language_code str
- Supported language; the default value is en
- model_type str
- Model type
- version str
- For pre-trained models this identifies the model type version used at model creation. For custom models, identifying a model by its model ID is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
- classificationModes List<Property Map>
- Classification modes supported by the model
- languageCode String
- Supported language; the default value is en
- modelType String
- Model type
- version String
- For pre-trained models this identifies the model type version used at model creation. For custom models, identifying a model by its model ID is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
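The version format above is given only as placeholders (<>::<>_<>::<>); assuming the shape of the one concrete example (service::modelType_modelVersion::customVersion), a hypothetical parser could look like this. The function name and returned keys are illustrative, not part of this API:

```python
def parse_model_version(version: str) -> dict:
    # Assumed shape: <service>::<modelType>_<modelVersion>::<customVersion>,
    # e.g. "ai-lang::NER_V1::CUSTOM-V0" (the docs' placeholder example).
    service, model, custom = version.split("::")
    # Split the middle segment on its last underscore: "NER_V1" -> ("NER", "V1").
    model_type, _, model_version = model.rpartition("_")
    return {
        "service": service,
        "model_type": model_type,
        "model_version": model_version,
        "custom_version": custom,
    }

parsed = parse_model_version("ai-lang::NER_V1::CUSTOM-V0")
```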
GetModelModelDetailClassificationMode     
- ClassificationMode string
- The classification mode
- Version string
- For pre-trained models this identifies the model type version used at model creation. For custom models, identifying a model by its model ID is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
- ClassificationMode string
- The classification mode
- Version string
- For pre-trained models this identifies the model type version used at model creation. For custom models, identifying a model by its model ID is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
- classificationMode String
- The classification mode
- version String
- For pre-trained models this identifies the model type version used at model creation. For custom models, identifying a model by its model ID is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
- classificationMode string
- The classification mode
- version string
- For pre-trained models this identifies the model type version used at model creation. For custom models, identifying a model by its model ID is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
- classification_mode str
- The classification mode
- version str
- For pre-trained models this identifies the model type version used at model creation. For custom models, identifying a model by its model ID is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
- classificationMode String
- The classification mode
- version String
- For pre-trained models this identifies the model type version used at model creation. For custom models, identifying a model by its model ID is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
GetModelTestStrategy   
- StrategyType string
- Defines the test strategy, i.e. the different datasets used for testing and for validation (optional)
- TestingDatasets List<GetModelTestStrategyTestingDataset>
- The dataset used for testing
- ValidationDatasets List<GetModelTestStrategyValidationDataset>
- The dataset used for validation (optional)
- StrategyType string
- Defines the test strategy, i.e. the different datasets used for testing and for validation (optional)
- TestingDatasets []GetModelTestStrategyTestingDataset
- The dataset used for testing
- ValidationDatasets []GetModelTestStrategyValidationDataset
- The dataset used for validation (optional)
- strategyType String
- Defines the test strategy, i.e. the different datasets used for testing and for validation (optional)
- testingDatasets List<GetModelTestStrategyTestingDataset>
- The dataset used for testing
- validationDatasets List<GetModelTestStrategyValidationDataset>
- The dataset used for validation (optional)
- strategyType string
- Defines the test strategy, i.e. the different datasets used for testing and for validation (optional)
- testingDatasets GetModelTestStrategyTestingDataset[]
- The dataset used for testing
- validationDatasets GetModelTestStrategyValidationDataset[]
- The dataset used for validation (optional)
- strategy_type str
- Defines the test strategy, i.e. the different datasets used for testing and for validation (optional)
- testing_datasets Sequence[GetModelTestStrategyTestingDataset]
- The dataset used for testing
- validation_datasets Sequence[GetModelTestStrategyValidationDataset]
- The dataset used for validation (optional)
- strategyType String
- Defines the test strategy, i.e. the different datasets used for testing and for validation (optional)
- testingDatasets List<Property Map>
- The dataset used for testing
- validationDatasets List<Property Map>
- The dataset used for validation (optional)
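Put together, a returned test strategy nests datasets and their object storage locations. A minimal sketch of that shape, using the camelCase field names above; the strategyType and datasetType string values here are illustrative assumptions, not confirmed enum values for this API:

```python
# Hypothetical test-strategy value mirroring the nested fields documented above.
test_strategy = {
    "strategyType": "TEST_AND_VALIDATION_DATASET",  # assumed example value
    "testingDatasets": [{
        "datasetType": "OBJECT_STORAGE",            # assumed example value
        "locationDetails": [{
            "namespace": "mynamespace",             # object storage namespace
            "bucket": "model-data",                 # object storage bucket name
            "objectNames": ["test.jsonl"],          # files to process
        }],
    }],
    # validationDatasets is optional, per the strategy description.
}

first_location = test_strategy["testingDatasets"][0]["locationDetails"][0]
```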
GetModelTestStrategyTestingDataset     
- DatasetId string
- Data Science Labelling Service OCID
- DatasetType string
- The dataset type
- LocationDetails List<GetModelTestStrategyTestingDatasetLocationDetail>
- Object storage location details
- DatasetId string
- Data Science Labelling Service OCID
- DatasetType string
- The dataset type
- LocationDetails []GetModelTestStrategyTestingDatasetLocationDetail
- Object storage location details
- datasetId String
- Data Science Labelling Service OCID
- datasetType String
- The dataset type
- locationDetails List<GetModelTestStrategyTestingDatasetLocationDetail>
- Object storage location details
- datasetId string
- Data Science Labelling Service OCID
- datasetType string
- The dataset type
- locationDetails GetModelTestStrategyTestingDatasetLocationDetail[]
- Object storage location details
- dataset_id str
- Data Science Labelling Service OCID
- dataset_type str
- The dataset type
- location_details Sequence[GetModelTestStrategyTestingDatasetLocationDetail]
- Object storage location details
- datasetId String
- Data Science Labelling Service OCID
- datasetType String
- The dataset type
- locationDetails List<Property Map>
- Object storage location details
GetModelTestStrategyTestingDatasetLocationDetail       
- Bucket string
- Object storage bucket name
- LocationType string
- Possible object storage location types
- Namespace string
- Object storage namespace
- ObjectNames List<string>
- Array of files which need to be processed in the bucket
- Bucket string
- Object storage bucket name
- LocationType string
- Possible object storage location types
- Namespace string
- Object storage namespace
- ObjectNames []string
- Array of files which need to be processed in the bucket
- bucket String
- Object storage bucket name
- locationType String
- Possible object storage location types
- namespace String
- Object storage namespace
- objectNames List<String>
- Array of files which need to be processed in the bucket
- bucket string
- Object storage bucket name
- locationType string
- Possible object storage location types
- namespace string
- Object storage namespace
- objectNames string[]
- Array of files which need to be processed in the bucket
- bucket str
- Object storage bucket name
- location_type str
- Possible object storage location types
- namespace str
- Object storage namespace
- object_names Sequence[str]
- Array of files which need to be processed in the bucket
- bucket String
- Object storage bucket name
- locationType String
- Possible object storage location types
- namespace String
- Object storage namespace
- objectNames List<String>
- Array of files which need to be processed in the bucket
GetModelTestStrategyValidationDataset     
- DatasetId string
- Data Science Labelling Service OCID
- DatasetType string
- The dataset type
- LocationDetails List<GetModelTestStrategyValidationDatasetLocationDetail>
- Object storage location details
- DatasetId string
- Data Science Labelling Service OCID
- DatasetType string
- The dataset type
- LocationDetails []GetModelTestStrategyValidationDatasetLocationDetail
- Object storage location details
- datasetId String
- Data Science Labelling Service OCID
- datasetType String
- The dataset type
- locationDetails List<GetModelTestStrategyValidationDatasetLocationDetail>
- Object storage location details
- datasetId string
- Data Science Labelling Service OCID
- datasetType string
- The dataset type
- locationDetails GetModelTestStrategyValidationDatasetLocationDetail[]
- Object storage location details
- dataset_id str
- Data Science Labelling Service OCID
- dataset_type str
- The dataset type
- location_details Sequence[GetModelTestStrategyValidationDatasetLocationDetail]
- Object storage location details
- datasetId String
- Data Science Labelling Service OCID
- datasetType String
- The dataset type
- locationDetails List<Property Map>
- Object storage location details
GetModelTestStrategyValidationDatasetLocationDetail       
- Bucket string
- Object storage bucket name
- LocationType string
- Possible object storage location types
- Namespace string
- Object storage namespace
- ObjectNames List<string>
- Array of files which need to be processed in the bucket
- Bucket string
- Object storage bucket name
- LocationType string
- Possible object storage location types
- Namespace string
- Object storage namespace
- ObjectNames []string
- Array of files which need to be processed in the bucket
- bucket String
- Object storage bucket name
- locationType String
- Possible object storage location types
- namespace String
- Object storage namespace
- objectNames List<String>
- Array of files which need to be processed in the bucket
- bucket string
- Object storage bucket name
- locationType string
- Possible object storage location types
- namespace string
- Object storage namespace
- objectNames string[]
- Array of files which need to be processed in the bucket
- bucket str
- Object storage bucket name
- location_type str
- Possible object storage location types
- namespace str
- Object storage namespace
- object_names Sequence[str]
- Array of files which need to be processed in the bucket
- bucket String
- Object storage bucket name
- locationType String
- Possible object storage location types
- namespace String
- Object storage namespace
- objectNames List<String>
- Array of files which need to be processed in the bucket
GetModelTrainingDataset   
- DatasetId string
- Data Science Labelling Service OCID
- DatasetType string
- The dataset type
- LocationDetails List<GetModelTrainingDatasetLocationDetail>
- Object storage location details
- DatasetId string
- Data Science Labelling Service OCID
- DatasetType string
- The dataset type
- LocationDetails []GetModelTrainingDatasetLocationDetail
- Object storage location details
- datasetId String
- Data Science Labelling Service OCID
- datasetType String
- The dataset type
- locationDetails List<GetModelTrainingDatasetLocationDetail>
- Object storage location details
- datasetId string
- Data Science Labelling Service OCID
- datasetType string
- The dataset type
- locationDetails GetModelTrainingDatasetLocationDetail[]
- Object storage location details
- dataset_id str
- Data Science Labelling Service OCID
- dataset_type str
- The dataset type
- location_details Sequence[GetModelTrainingDatasetLocationDetail]
- Object storage location details
- datasetId String
- Data Science Labelling Service OCID
- datasetType String
- The dataset type
- locationDetails List<Property Map>
- Object storage location details
GetModelTrainingDatasetLocationDetail     
- Bucket string
- Object storage bucket name
- LocationType string
- Possible object storage location types
- Namespace string
- Object storage namespace
- ObjectNames List<string>
- Array of files which need to be processed in the bucket
- Bucket string
- Object storage bucket name
- LocationType string
- Possible object storage location types
- Namespace string
- Object storage namespace
- ObjectNames []string
- Array of files which need to be processed in the bucket
- bucket String
- Object storage bucket name
- locationType String
- Possible object storage location types
- namespace String
- Object storage namespace
- objectNames List<String>
- Array of files which need to be processed in the bucket
- bucket string
- Object storage bucket name
- locationType string
- Possible object storage location types
- namespace string
- Object storage namespace
- objectNames string[]
- Array of files which need to be processed in the bucket
- bucket str
- Object storage bucket name
- location_type str
- Possible object storage location types
- namespace str
- Object storage namespace
- object_names Sequence[str]
- Array of files which need to be processed in the bucket
- bucket String
- Object storage bucket name
- locationType String
- Possible object storage location types
- namespace String
- Object storage namespace
- objectNames List<String>
- Array of files which need to be processed in the bucket
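Each location detail combines a namespace, a bucket, and a list of object names. A small sketch of joining those fields into per-object URIs; the oci://<bucket>@<namespace>/<object> form is a common OCI object storage convention, assumed here for illustration:

```python
# Build one URI per object name from the location-detail fields above.
# All values are made-up examples.
location = {
    "namespace": "mynamespace",
    "bucket": "training-data",
    "objectNames": ["labels/train.jsonl", "labels/test.jsonl"],
}

uris = [
    f"oci://{location['bucket']}@{location['namespace']}/{name}"
    for name in location["objectNames"]
]
```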
Package Details
- Repository
- oci pulumi/pulumi-oci
- License
- Apache-2.0
- Notes
- This Pulumi package is based on the oci Terraform Provider.