Repository Services Section¶
Configure the audit log¶
Egeria's audit log provides a configurable set of destinations for audit records and other diagnostic logging for an OMAG Server. Some destinations also support a query interface to allow an administrator to understand how the server is running.
Each audit log record has a severity that can be used to route it to one or more specific destinations. Therefore, when an audit log destination is configured, it is optionally supplied with a list of severities to filter the types of audit log records it should receive.
The audit log severities are as follows:
| Severity | Description |
|---|---|
| Information | The server is providing information about its normal operation. |
| Event | An event was received from another member of the open metadata repository cohort. |
| Decision | A decision has been made related to the interaction of the local metadata repository and the rest of the cohort. |
| Action | An action is required by the administrator. At a minimum, the situation needs to be investigated and, if necessary, corrective action taken. |
| Error | An error occurred, possibly caused by an incompatibility between the local metadata repository and one of the remote repositories. The local repository may restrict some of the metadata interchange functions as a result. |
| Exception | An unexpected exception occurred. This means that the server needs some administration attention to correct configuration or fix a logic error because it is not operating as a proper peer in the open metadata repository cohort. |
| Security | Unauthorized access to a service or metadata instance has been attempted. |
| Startup | A new component is starting up. |
| Shutdown | An existing component is shutting down. |
| Asset | An auditable action relating to an asset has been taken. |
| Types | Activity is occurring that relates to the open metadata types in use by this server. |
| Cohort | The server is exchanging registration information about an open metadata repository cohort that it is connecting to. |
| Trace | This is additional information on the operation of the server that may be of assistance in debugging a problem. It is not normally logged to any destination, but can be added when needed. |
| PerfMon | This log record contains performance monitoring timing information for specific types of processing. It is not normally logged to any destination, but can be added when needed. |
| &lt;Unknown&gt; | Uninitialized severity. |
The default audit log destination is the console audit log destination. This writes selected parts of each audit log record to "standard out" (stdout).
It is configured to receive log records of all severities except Event, Trace and PerfMon. It is added automatically to a server's configuration document when other sections are configured.
Add audit log destinations¶
If the server is a development or test server, then the default audit log configuration is probably sufficient, and you should use the following command:
POST - set default audit log destination
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/audit-log-destinations/default
Note: Using this command overrides all previous audit log destinations configured for the server.
If this server is a production server then you will probably want to set up the audit log destinations explicitly. You can add multiple destinations and each one can be set up to receive different severities of audit log records.
There are various destinations that can be configured for the audit log:
Since the default audit log destination is also a console audit log destination, only use this option to add the Trace and PerfMon severities.
POST - add console audit log destination
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/audit-log-destinations/console
The body of the request should be a list of severities. If an empty list is passed as the request body, all severities are supported by the destination.
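For example, following the advice above, the request body for the console destination might list just the two debugging severities (this particular selection is an illustrative choice; the values are the severity names from the table above):

```json
[ "Trace", "PerfMon" ]
```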
This destination writes JSON files to a shared directory, one file for each audit log record.
POST - add JSON file-based audit log destination
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/audit-log-destinations/files
The body of the request should be a list of severities. If an empty list is passed as the request body, all severities are supported by the destination.
This destination writes each log record as an event on the supplied event topic. It assumes that the event bus is set up first.
POST - add event-based audit log destination
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/audit-log-destinations/event-topic
The body of the request should be a list of severities. If an empty list is passed as the request body, all severities are supported by the destination.
This writes full log records to the slf4j ecosystem. When configuring slf4j as a destination you also need to specify the audit log logger category via the application properties of the OMAG Server Platform. This is described in the Connecting the OMAG Audit Log Framework section of the developer logging guide.
The configuration of the slf4j ecosystem determines its ultimate destination(s).
POST - add slf4j audit log destination
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/audit-log-destinations/slf4j
The body of the request should be a list of severities. If an empty list is passed as the request body, all severities are supported by the destination.
This sets up an audit log destination that is described through a connection. In this case, the connection is passed in the request body and the supported severities are supplied in the connection's configuration properties.
POST - add connection-based audit log destination
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/audit-log-destinations/connection
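As an illustration, a request body for a connection-based destination might look like the sketch below. It follows the shape of the Connection example shown elsewhere in this section; the `connectorProviderClassName` shown and the `supportedSeverities` configuration property are assumptions based on the console audit log connector, so check the connector's documentation for the exact property names it recognizes.

```json
{
  "class": "Connection",
  "connectorType": {
    "class": "ConnectorType",
    "connectorProviderClassName": "org.odpi.openmetadata.adapters.repositoryservices.auditlogstore.console.ConsoleAuditLogStoreProvider"
  },
  "configurationProperties": {
    "supportedSeverities": [ "Error", "Exception", "Security" ]
  }
}
```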
It is also possible to set up all the audit log destinations in one command as a list of connections. Using this option overrides all previous audit log destinations and so can be used as the update command. The list of connections is passed in the request body and the supported severities are supplied in each connection's configuration properties.
POST - add a list of connection-based audit log destinations
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/audit-log-destinations
Retrieving audit log destinations¶
The configured list of audit log destinations can be retrieved using this command:
GET - the list of configured audit log destinations
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/audit-log-destinations
Updating audit log destinations¶
Audit log destinations can be updated individually, by qualified name, using the following command:
POST - update connection-based audit log destination
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/audit-log-destinations/connection/{{qualifiedName}}
If you are not sure what the audit log connection is called, retrieve the list of configured audit log destinations; the resulting list of audit log connections includes their qualified names.
Remove audit log destinations¶
The following will remove all audit log destinations, enabling you to add a new set of audit log destinations.
DELETE - clear all audit log destinations
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/audit-log-destinations
It is also possible to remove a single audit log destination using its connection's qualified name.
DELETE - clear the named audit log destination
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/audit-log-destinations/{{qualifiedName}}
Configure the native repository connector¶
A Metadata Access Store supports a metadata repository that has native support for the open metadata types and instances. This is enabled by adding a native metadata repository connector to the server's configuration document.
Add a repository connector implementation¶
Egeria provides a number of implementations of such a repository. Only one of these options can be configured for a given metadata server at a time.
This command enables a XTDB-based metadata repository, which itself has a number of pluggable back-end options for persistence and other configuration options.
This native metadata repository is currently the highest-performing and most fully-functional repository for Egeria. It supports all metadata operations, including historical metadata, and can be made highly available through clustered deployment.
Enable the bi-temporal graph repository (XTDB)
This in-memory version of the XTDB repository is designed for testing.
POST {{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/local-repository/mode/xtdb-in-memory-repository
This command sets up XTDB with a RocksDB key-value (KV) store to provide a local, high-performance, historical metadata repository. It only supports one instance of the server and so cannot be used in a horizontal scale-out HA deployment.
POST {{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/local-repository/mode/xtdb-local-kv-repository
This command allows you to specify different XTDB back ends so it can be run in an HA context with multiple instances of the same server deployed against the same repository.
POST {{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/local-repository/mode/xtdb-local-repository
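The shape of the request body depends on the XTDB connector's configuration options. The sketch below is hypothetical: the `xtdbConfig` property name and the nested keys are assumptions modelled on XTDB's own JSON configuration format, so consult the XTDB connector documentation for the exact structure.

```json
{
  "xtdbConfig": {
    "xtdb/index-store": {
      "kv-store": {
        "xtdb/module": "xtdb.rocksdb/->kv-store",
        "db-dir": "data/servers/myserver/xtdb/rdb-index"
      }
    }
  }
}
```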
??? info "For Egeria releases before version 5.0 ... The XTDB connector is located in its own git repository egeria-connector-xtdb.git. The JAR file needs to be built from this repository and placed in the OMAG Server Platform's class path. It is configured in the Metadata Access Store using the following command:
POST {{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/local-repository/mode/plugin-repository/connection
```json
{
"class": "Connection",
"connectorType": {
"class": "ConnectorType",
"connectorProviderClassName": "org.odpi.egeria.connectors.juxt.xtdb.repositoryconnector.XtdbOMRSRepositoryConnectorProvider"
}
}
```
May require additional driver libraries
Note that depending on the persistence you configure, you may need to obtain additional driver libraries for your back-end service, as not every driver is embedded in the XTDB connector itself.
This command enables a JanusGraph-based native metadata repository that is embedded in the metadata server. This repository does not maintain historical versions of metadata.
Enable the JanusGraph repository
POST {{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/local-repository/mode/local-graph-repository
If no request body is used, metadata will be stored on the local disk. It is possible to pass a set of storage properties to JanusGraph to enable it to use a different persistence service. However, the repository uses local transactions and does not support multiple instances of the same server/repository being active at one time.
The in-memory native repository maintains an in-memory store of metadata. It is useful for demos and testing. No metadata is kept if the Metadata Access Store is shut down. It should not be used in a production environment.
Enable the in-memory repository
POST {{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/local-repository/mode/in-memory-repository
The read-only native repository connector provides a compliant implementation of a local repository that can be configured into a Metadata Access Store. It does not support the interfaces for create, update, or delete. However, it does support the search interfaces and is able to cache metadata. This means it can be loaded with metadata from an open metadata archive and connected to a cohort. The content from the archive will be shared with other members of the cohort.
POST - enable the read-only repository
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/local-repository/mode/read-only-repository
Remove the native repository connector¶
This command removes all configuration for the local repository. This includes the local metadata collection id. If a new local repository is added, it will have a new local metadata collection id and will not be able to automatically re-register with its cohort(s).
Remove the local repository
DELETE {{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/local-repository
Configuring registration to an Open Metadata Repository Cohort¶
An OMAG Server that is capable of being a Cohort Member can register with one or more open metadata repository cohorts.
Each cohort has a memorable name, e.g. cocoCohort. This name needs to be used in the configuration of each member. At the heart of a cohort are 1-4 cohort topics. These are topics on an event bus that the members use to exchange information.
There is a choice of topic structure for the cohort.
- A single topic is used for all types of events
- Three topics are used, each dedicated to a specific type of cohort event:
- Registration events that exchange information about the members of the cohort.
- Type verification events that ensure consistency of the open metadata types used by the members of the cohort.
- Instance events that enable members of the cohort to share metadata elements.
The use of a single topic comes from the original implementation of Egeria. The use of the three dedicated topics was added later in version 2.11 to reduce the latency of cohort registration and to allow tuning of each topic's configuration. This is essential when multiple instances of an OMAG server are running in a cluster because the registration and type verification events need to be received by all server instances and the instance events need only to be received by one of the server instances.
Typically, all members of the cohort should be configured to use the same topic structure. However, if one of the members is back level and can only support the single topic then the other members can be set up to operate both topic structures. This is less efficient as these servers will process most instance events twice. However, it does provide a workaround until the back-level member can be upgraded.
The choices of topic structure are summarized in Figure 1.
Figure 1: Choices of cohort topic structures referred to as SINGLE_TOPIC, DEDICATED_TOPICS and BOTH_SINGLE_AND_DEDICATED_TOPICS reading left to right
Configuration commands¶
The commands for configuring a server as a member of a cohort are shown below. Before calling these commands, make sure that the default settings for the event bus are configured, and you know the name of the cohort and the topic structure it is using.
Add access to a cohort
The following command registers the server with a cohort using the default settings. This includes the default cohort topic structure, which is SINGLE_TOPIC before version 3.0 and DEDICATED_TOPICS for version 3.0 and above.
POST {platformURLRoot}/open-metadata/admin-services/users/{adminUserId}/servers/{serverName}/cohorts/{cohortName}
Alternatively it is possible to explicitly specify the cohort topic structure. The example below sets it to DEDICATED_TOPICS. The other options are SINGLE_TOPIC and BOTH_SINGLE_AND_DEDICATED_TOPICS.
POST {platformURLRoot}/open-metadata/admin-services/users/{adminUserId}/servers/{serverName}/cohorts/{cohortName}/topic-structure/DEDICATED_TOPICS
Both of these commands optionally support passing a map of name-value pairs in the request body. These properties are added to the additionalProperties attribute of the Connection objects for each of the cohort topics. The additional properties supported are specific to the topic connector implementation. For example, see the Apache Kafka Topic Connector Documentation.
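For example, when the cohort topics use the Kafka topic connector, a request body tuning the producer and consumer settings might look like this (the producer and consumer properties appear in the connector's recognizedConfigurationProperties below; the specific values shown here are illustrative):

```json
{
  "producer": {
    "bootstrap.servers": "localhost:9092"
  },
  "consumer": {
    "bootstrap.servers": "localhost:9092",
    "max.poll.records": "100"
  }
}
```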
The result of the cohort configuration call fills out an entry in the cohort list of the server's configuration document. The fields in a cohort list entry are shown in Figure 2.
Figure 2: Fields in an entry in a server's cohort list
It is possible to update any of these fields directly using the following command:
POST {platformURLRoot}/open-metadata/admin-services/users/{adminUserId}/servers/{serverName}/cohorts/{cohortName}/configuration
JSON structure for a member that is using DEDICATED_TOPICS
{
"class": "CohortConfig",
"cohortName": "cocoCohort",
"cohortRegistryConnection": {
"class": "Connection",
"headerVersion": 0,
"connectorType": {
"class": "ConnectorType",
"headerVersion": 0,
"type": {
"class": "ElementType",
"headerVersion": 0,
"elementOrigin": "LOCAL_COHORT",
"elementVersion": 0,
"elementTypeId": "954421eb-33a6-462d-a8ca-b5709a1bd0d4",
"elementTypeName": "ConnectorType",
"elementTypeVersion": 1,
"elementTypeDescription": "A set of properties describing a type of connector."
},
"guid": "108b85fe-d7a8-45c3-9f88-742ac4e4fd14",
"qualifiedName": "File Based Cohort Registry Store Connector",
"displayName": "File Based Cohort Registry Store Connector",
"description": "Connector supports storing of the open metadata cohort registry in a file.",
"connectorProviderClassName": "org.odpi.openmetadata.adapters.repositoryservices.cohortregistrystore.file.FileBasedRegistryStoreProvider"
},
"endpoint": {
"class": "Endpoint",
"headerVersion": 0,
"address": "./data/servers/cocoMDS4/cohorts/cocoCohort.registrystore"
}
},
"cohortOMRSRegistrationTopicConnection": {
"class": "VirtualConnection",
"headerVersion": 0,
"connectorType": {
"class": "ConnectorType",
"headerVersion": 0,
"connectorProviderClassName": "org.odpi.openmetadata.repositoryservices.connectors.omrstopic.OMRSTopicProvider"
},
"embeddedConnections": [
{
"class": "EmbeddedConnection",
"headerVersion": 0,
"position": 0,
"displayName": "cocoCohort OMRS Topic for registrations",
"embeddedConnection": {
"class": "Connection",
"headerVersion": 0,
"connectorType": {
"class": "ConnectorType",
"headerVersion": 0,
"type": {
"class": "ElementType",
"headerVersion": 0,
"elementOrigin": "LOCAL_COHORT",
"elementVersion": 0,
"elementTypeId": "954421eb-33a6-462d-a8ca-b5709a1bd0d4",
"elementTypeName": "ConnectorType",
"elementTypeVersion": 1,
"elementTypeDescription": "A set of properties describing a type of connector."
},
"guid": "3851e8d0-e343-400c-82cb-3918fed81da6",
"qualifiedName": "Kafka Open Metadata Topic Connector",
"displayName": "Kafka Open Metadata Topic Connector",
"description": "Kafka Open Metadata Topic Connector supports string based events over an Apache Kafka event bus.",
"connectorProviderClassName": "org.odpi.openmetadata.adapters.eventbus.topic.kafka.KafkaOpenMetadataTopicProvider",
"recognizedConfigurationProperties": [
"producer",
"consumer",
"local.server.id",
"sleepTime"
]
},
"endpoint": {
"class": "Endpoint",
"headerVersion": 0,
"address": "egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration"
},
"configurationProperties": {
"producer": {
"bootstrap.servers": "localhost:9092"
},
"local.server.id": "73955db6-026c-4ba5-a180-1355dbf166cf",
"consumer": {
"bootstrap.servers": "localhost:9092"
}
}
}
}
]
},
"cohortOMRSTypesTopicConnection": {
"class": "VirtualConnection",
"headerVersion": 0,
"connectorType": {
"class": "ConnectorType",
"headerVersion": 0,
"connectorProviderClassName": "org.odpi.openmetadata.repositoryservices.connectors.omrstopic.OMRSTopicProvider"
},
"embeddedConnections": [
{
"class": "EmbeddedConnection",
"headerVersion": 0,
"position": 0,
"displayName": "cocoCohort OMRS Topic for types",
"embeddedConnection": {
"class": "Connection",
"headerVersion": 0,
"connectorType": {
"class": "ConnectorType",
"headerVersion": 0,
"type": {
"class": "ElementType",
"headerVersion": 0,
"elementOrigin": "LOCAL_COHORT",
"elementVersion": 0,
"elementTypeId": "954421eb-33a6-462d-a8ca-b5709a1bd0d4",
"elementTypeName": "ConnectorType",
"elementTypeVersion": 1,
"elementTypeDescription": "A set of properties describing a type of connector."
},
"guid": "3851e8d0-e343-400c-82cb-3918fed81da6",
"qualifiedName": "Kafka Open Metadata Topic Connector",
"displayName": "Kafka Open Metadata Topic Connector",
"description": "Kafka Open Metadata Topic Connector supports string based events over an Apache Kafka event bus.",
"connectorProviderClassName": "org.odpi.openmetadata.adapters.eventbus.topic.kafka.KafkaOpenMetadataTopicProvider",
"recognizedConfigurationProperties": [
"producer",
"consumer",
"local.server.id",
"sleepTime"
]
},
"endpoint": {
"class": "Endpoint",
"headerVersion": 0,
"address": "egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.types"
},
"configurationProperties": {
"producer": {
"bootstrap.servers": "localhost:9092"
},
"local.server.id": "73955db6-026c-4ba5-a180-1355dbf166cf",
"consumer": {
"bootstrap.servers": "localhost:9092"
}
}
}
}
]
},
"cohortOMRSInstancesTopicConnection": {
"class": "VirtualConnection",
"headerVersion": 0,
"connectorType": {
"class": "ConnectorType",
"headerVersion": 0,
"connectorProviderClassName": "org.odpi.openmetadata.repositoryservices.connectors.omrstopic.OMRSTopicProvider"
},
"embeddedConnections": [
{
"class": "EmbeddedConnection",
"headerVersion": 0,
"position": 0,
"displayName": "cocoCohort OMRS Topic for instances",
"embeddedConnection": {
"class": "Connection",
"headerVersion": 0,
"connectorType": {
"class": "ConnectorType",
"headerVersion": 0,
"type": {
"class": "ElementType",
"headerVersion": 0,
"elementOrigin": "LOCAL_COHORT",
"elementVersion": 0,
"elementTypeId": "954421eb-33a6-462d-a8ca-b5709a1bd0d4",
"elementTypeName": "ConnectorType",
"elementTypeVersion": 1,
"elementTypeDescription": "A set of properties describing a type of connector."
},
"guid": "3851e8d0-e343-400c-82cb-3918fed81da6",
"qualifiedName": "Kafka Open Metadata Topic Connector",
"displayName": "Kafka Open Metadata Topic Connector",
"description": "Kafka Open Metadata Topic Connector supports string based events over an Apache Kafka event bus.",
"connectorProviderClassName": "org.odpi.openmetadata.adapters.eventbus.topic.kafka.KafkaOpenMetadataTopicProvider",
"recognizedConfigurationProperties": [
"producer",
"consumer",
"local.server.id",
"sleepTime"
]
},
"endpoint": {
"class": "Endpoint",
"headerVersion": 0,
"address": "egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.instances"
},
"configurationProperties": {
"producer": {
"bootstrap.servers": "localhost:9092"
},
"local.server.id": "73955db6-026c-4ba5-a180-1355dbf166cf",
"consumer": {
"bootstrap.servers": "localhost:9092"
}
}
}
}
]
},
"cohortOMRSTopicProtocolVersion": "V1",
"eventsToProcessRule": "ALL"
}
Controlling the name of the cohort topic(s)
Typically, a production deployment of an event bus requires the topics to be explicitly defined in its configuration. In addition, many organizations have naming standards for topics. Therefore, Egeria provides commands to query the topic names from the configuration for easy automation and the ability to override the topic names.
The default single topic name is egeria.omag.openmetadata.repositoryservices.cohort.{cohortName}.OMRSTopic
and the default dedicated topic names are:
- For registration events -
egeria.omag.openmetadata.repositoryservices.cohort.{cohortName}.OMRSTopic.registration
- For type verification events -
egeria.omag.openmetadata.repositoryservices.cohort.{cohortName}.OMRSTopic.types
- For instance events -
egeria.omag.openmetadata.repositoryservices.cohort.{cohortName}.OMRSTopic.instances
This is the command to query the single topic name.
GET {platformURLRoot}/open-metadata/admin-services/users/{adminUserId}/servers/{serverName}/cohorts/{cohortName}/topic-name
{
"class": "StringResponse",
"relatedHTTPCode": 200,
"resultString": "egeria.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic"
}
{
"class": "StringResponse",
"relatedHTTPCode": 200
}
{
"class": "StringResponse",
"relatedHTTPCode": 400,
"exceptionClassName": "org.odpi.openmetadata.adminservices.ffdc.exception.OMAGInvalidParameterException",
"exceptionErrorMessage": "OMAG-ADMIN-400-033 The OMAG server cocoMDS1 is unable to override the cohort topic until the cocoCohortXXX cohort is set up",
"exceptionSystemAction": "No change has occurred in this server's configuration document.",
"exceptionUserAction": "Add the cohort configuration using the administration services and retry the request."
}
This is the command to retrieve the dedicated topics:
GET {platformURLRoot}/open-metadata/admin-services/users/{adminUserId}/servers/{serverName}/cohorts/{cohortName}/dedicated-topic-names
The result looks like this, with the registration topic first, then the type verification topic, and lastly the instances topic:
{
"class": "DedicatedTopicListResponse",
"relatedHTTPCode": 200,
"dedicatedTopicList": {
"registrationTopicName": "egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.registration",
"typesTopicName": "egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.types",
"instancesTopicName": "egeria.omag.openmetadata.repositoryservices.cohort.cocoCohort.OMRSTopic.instances"
}
}
Override the value for the cohort topic
It is also possible to change the name of the topics used by a cohort. Any changes must be issued against each member of the cohort so that they all connect to the same cohort topic(s). The new value takes effect the next time the server is started.
Changing the single topic name is done with the following command:
POST {platformURLRoot}/open-metadata/admin-services/users/{adminUserId}/servers/{serverName}/cohorts/{cohortName}/topic-name-override
{newTopicName}
The {newTopicName} flows in the request body as raw text.
This is the command for changing the registration topic name:
POST {platformURLRoot}/open-metadata/admin-services/users/{adminUserId}/servers/{serverName}/cohorts/{cohortName}/topic-name-override/registration
{newTopicName}
This is the command for changing the type verification topic name:
POST {platformURLRoot}/open-metadata/admin-services/users/{adminUserId}/servers/{serverName}/cohorts/{cohortName}/topic-name-override/types
{newTopicName}
This is the command for changing the "instances topic" name:
POST {platformURLRoot}/open-metadata/admin-services/users/{adminUserId}/servers/{serverName}/cohorts/{cohortName}/topic-name-override/instances
{newTopicName}
Disconnect from a cohort
This command unregisters a server from a cohort.
DELETE {platformURLRoot}/open-metadata/admin-services/users/{adminUserId}/servers/{serverName}/cohorts/{cohortName}
Configuring the open metadata archives to load on server startup
Configure metadata to load on startup¶
Open metadata archives contain pre-canned metadata types and instances for cohort members.
Archives can be added to the configuration document of a server to ensure their content is loaded each time the server is started. This is intended for repositories that do not store the archive content but keep it in memory.
Archives can also be loaded to a running server.
Loading the same archive multiple times
If an archive is loaded multiple times, its content is only added to the local repository once, and only if the repository does not already have the content. No errors are recorded if the content is already in the repository.
Adding an archive to the configuration document¶
Typically, an open metadata archive is stored as JSON format in a file. To configure the load of such a file use the following command:
POST - specify file to load
POST {{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/open-metadata-archives/file
The body of the request should be the fully-qualified path name of the file, or a path relative to the startup directory of the OMAG Server Platform. The file name should not be enclosed in quotes.
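For example, the request body could be a relative path such as the one below (the file name here is hypothetical; substitute the actual location of your archive file):

```
content-packs/SimpleCatalog.json
```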
Alternatively it is possible to set up the list of open metadata archives as a list of connections. These connections refer to connectors that can read and retrieve the open metadata archive content.
POST - specify connection(s) to load
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/open-metadata-archives
The body of the request should be the list of connections from which to load archives.
This option can be used when the open metadata archives are not stored in a file, or a different file connector from the default one for the OMAG Server Platform is required.
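A request body for this option is a list of Connection objects. The sketch below assumes the file-based open metadata archive store connector; the connectorProviderClassName and the endpoint address are assumptions included to illustrate the shape of the list:

```json
[
  {
    "class": "Connection",
    "connectorType": {
      "class": "ConnectorType",
      "connectorProviderClassName": "org.odpi.openmetadata.adapters.repositoryservices.archiveconnector.file.FileBasedOpenMetadataArchiveStoreProvider"
    },
    "endpoint": {
      "class": "Endpoint",
      "address": "content-packs/SimpleCatalog.json"
    }
  }
]
```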
Removing an archive from the configuration document on startup¶
Finally, this is how to remove the archives from the configuration document.
DELETE - remove archives from configuration document
{{platformURLRoot}}/open-metadata/admin-services/users/{{adminUserId}}/servers/{{serverName}}/open-metadata-archives
The body of the request should be the path to the metadata archive file.