# Conformance Test Suite Chart (egeria-cts)
This is a deployment of Egeria that will automatically run the Conformance Test Suite's repository workbench against a deployed metadata repository.
## Prerequisites
In order to use the chart, you'll first need to have the following installed:

- A Kubernetes cluster at 1.15 or above
- The `kubectl` tool in your path
- Helm 3.0 or above
No configuration of the chart is required to use the defaults, but the available options are described below.
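You can quickly confirm the client-side prerequisites from your shell (standard `kubectl` and `helm` commands, nothing chart-specific):

```shell
# Confirm client tooling versions against the prerequisites above.
kubectl version
helm version
```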
## Installation
**Install (deploy) the CTS chart**

```shell
helm repo add egeria https://odpi.github.io/egeria-charts
helm repo update
helm install [-f overrides.yaml] <name> egeria/egeria-cts
```
The `-f overrides.yaml` flag is optional, and only necessary if you are overriding any of the configuration (see options below), while `<name>` is the name you want to give your deployment.
**The installation may take a minute or so**

This is because it is not only creating the required objects in Kubernetes to run the platforms, but also configuring Egeria itself, which involves waiting for everything to start up before configuring Egeria via REST API calls.
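If you want to watch the deployment come up while the chart initializes, a standard `kubectl` watch is sufficient (no chart-specific labels assumed here):

```shell
# Watch the chart's pods start up; press Ctrl-C to stop watching.
kubectl get pods -w
```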
## Configuration
In a Helm chart, the configuration that has been externalized by the chart writer is specified in the `values.yaml` file, which you can find in this directory. However, rather than edit this file directly, it's recommended you create an additional file with the required overrides (for example, called `overrides.yaml`).
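If you would like to see the full set of configurable values before writing your overrides, you can dump the chart's defaults with standard Helm tooling (the output filename here is just a suggestion):

```shell
# Save the chart's default values as a reference for writing overrides.yaml
helm show values egeria/egeria-cts > default-values.yaml
```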
The primary values you will likely want to override in this chart are as follows:
### Technology under test
The technology under test ("tut") defines the repository that you want to run the CTS against. By default, the chart will run the CTS against the built-in graph repository. To configure it to test some other repository, you will need to override one or more of the following:
| Parameter | Description |
|---|---|
| `tut.serverType` | Defines the type of the repository to be tested: `native` for a built-in repository of Egeria core (like the in-memory repository), `plugin` for a pluggable native repository (like XTDB), or `proxy` for a third party repository technology (like Apache Atlas). |
| `tut.connectorProvider` | The canonical class name of the connector provider class for the repository connector to use. |
| `tut.connectorConfig` | An optional structure defining any additional configuration for the connector. This should be provided as YAML, and will be automatically translated into JSON to pass through to the `configurationProperties` parameter of the connector when configuring it. |
| `tut.serverEndpoint.host` | When using a third party technology that relies on integrating to a system outside Egeria (i.e. when using a `proxy`), this is the hostname of the third party technology. |
| `tut.serverEndpoint.port` | When using a third party technology that relies on integrating to a system outside Egeria (i.e. when using a `proxy`), this is the port number of the third party technology. |
| `tut.serverEndpoint.protocol` | When using a third party technology that relies on integrating to a system outside Egeria (i.e. when using a `proxy`), this is the protocol to use when accessing the third party technology (e.g. `https`). |
| `tut.serverEndpoint.username` | When using a third party technology that relies on integrating to a system outside Egeria (i.e. when using a `proxy`), this is the username of the system user to use when accessing the third party technology. |
| `tut.serverEndpoint.password` | When using a third party technology that relies on integrating to a system outside Egeria (i.e. when using a `proxy`), this is the password for the system user to use when accessing the third party technology. |
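To illustrate how the `tut.serverEndpoint.*` settings fit together, a hypothetical `proxy` override might look like the following; the connector provider class, host, port, and credentials are all placeholders, not real defaults:

```yaml
# Hypothetical proxy configuration -- every value below is a placeholder.
tut:
  serverType: "proxy"
  connectorProvider: "org.example.ThirdPartyOMRSRepositoryConnectorProvider"  # placeholder class
  serverEndpoint:
    host: "repo.example.com"
    port: "443"
    protocol: "https"
    username: "service-user"
    password: "service-password"
```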
**Example tut override for XTDB**

```yaml
tut:
  serverType: "plugin"
  connectorProvider: "org.odpi.egeria.connectors.juxt.xtdb.repositoryconnector.XtdbOMRSRepositoryConnectorProvider"
  connectorConfig:
    xtdbConfig:
      xtdb/index-store:
        kv-store:
          xtdb/module: xtdb.lmdb/->kv-store
          db-dir: data/servers/xtdb/lmdb-index
      xtdb/document-store:
        kv-store:
          xtdb/module: xtdb.rocksdb/->kv-store
          db-dir: data/servers/xtdb/rdb-docs
      xtdb/tx-log:
        kv-store:
          xtdb/module: xtdb.rocksdb/->kv-store
          db-dir: data/servers/xtdb/rdb-tx
      xtdb.lucene/lucene-store:
        db-dir: data/servers/xtdb/lucene
```
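Having saved an override like the one above into `overrides.yaml`, the install command from earlier applies unchanged (the deployment name `mycts` is just an example):

```shell
helm install -f overrides.yaml mycts egeria/egeria-cts
```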
### Connector downloads
In addition to the general technology under test configuration outlined above, when configuring the chart to test a non-core Egeria connector you also need to specify any dependencies that need to be downloaded to run that connector (i.e. at a minimum to make that connector's connector provider and connector itself available to the pod running the technology under test).
You do this by overriding the `downloads` value with a list of `filename` and `url` pairs.
**Example downloads override for XTDB - for latest RELEASE of connector**

```yaml
downloads:
  - filename: egeria-connector-xtdb-LATEST_RELEASE-jar-with-dependencies.jar
    url: "http://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.odpi.egeria&a=egeria-connector-xtdb&v=RELEASE&c=jar-with-dependencies"
```
**Example downloads override for XTDB - for latest SNAPSHOT of connector**

Only use this if you want to run development-level code for the connector. You will need to explicitly specify the version, in this case `3.9-SNAPSHOT`.

```yaml
downloads:
  - filename: egeria-connector-xtdb-LATEST_SNAPSHOT-jar-with-dependencies.jar
    url: "http://oss.sonatype.org/service/local/artifact/maven/redirect?r=snapshots&g=org.odpi.egeria&a=egeria-connector-xtdb&v=3.9-SNAPSHOT&c=jar-with-dependencies"
```
**Why both filename and URL?**

As illustrated in the examples above, some URLs dynamically redirect and resolve to a given filename, for example to always download the latest release of a given file. Because only minimal utilities are installed in the pod that performs these downloads, there are cases (like the example above) where they cannot automatically determine the filename to produce from such dynamic URLs.

By specifying both the URL and the filename, we ensure the file is downloaded and stored under an expected name (like a `.jar` file) so that it can be automatically resolved by e.g. Java's class loader.
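If you want to sanity-check a `downloads` entry before deploying, you can resolve the redirect locally with `curl` (ordinary tooling on your own machine; this is not something the chart runs for you):

```shell
# Follow the dynamic redirect and save the artifact under the declared filename.
curl -L -o egeria-connector-xtdb-LATEST_RELEASE-jar-with-dependencies.jar \
  "http://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.odpi.egeria&a=egeria-connector-xtdb&v=RELEASE&c=jar-with-dependencies"
```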
### Scale of test
There is a single option to configure the scale of the CTS test:

- `records`: defines the "scale factor" for the CTS, where the number of instances that will be created (per type) is roughly 2x this number, and the maximum number of search results for the paged search tests will be limited to this number.
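For illustration, an override that changes the scale factor looks like this; the value `20` is arbitrary, and the chart's actual default is in its `values.yaml`:

```yaml
# Hypothetical scale override: ~40 instances per type, search pages capped at 20.
records: 20
```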
## Monitoring progress
You can monitor the progress of the CTS execution by looking at the log output from the initialization and reporting pods:
**Get the init and report pod names**

```shell
kubectl get pods -l 'app.kubernetes.io/component in (init,report)'
```
**Example output for retrieving the `init` pod name**

```
NAME                            READY   STATUS    RESTARTS   AGE
p320-10-init-5845c9bb79-5ddwh   1/1     Running   0          45h
t12-init-585b47f74-85r8m        1/1     Running   0          39m
```
If you have multiple CTS deployments running in parallel in your cluster, this may return more than one result (as in the example above); the pod whose name starts with the name you gave your deployment is the one that belongs to it. You will also see report pods once the tests start, and the init pod will go away after it finishes.
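To narrow the listing to a single deployment, you can simply filter on the name prefix (using `t12` from the example above; substitute your own deployment name):

```shell
# Show only the pods belonging to the "t12" deployment.
kubectl get pods -l 'app.kubernetes.io/component in (init,report)' | grep '^t12-'
```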
Once you have the pod name, you can then view the log:
**Review the `init` pod logs**

```shell
kubectl logs -f <podname>
```

Where `<podname>` is the name of the pod discovered in the command above (e.g. `t12-init-585b47f74-85r8m`).
**Example output from the `init` log**

This opening section simply displays the environment variables that have been configured, primarily useful for debugging or other diagnostic purposes:

```
-- Environment variables --
...
CONNECTOR_PROVIDER=org.odpi.egeria.connectors.juxt.xtdb.repositoryconnector.XtdbOMRSRepositoryConnectorProvider
...
-- End of Environment variables --
```
The configuration then occurs. Of primary importance here is that all the results are `200`, indicating each operation was successful:

```
-- Configuring platform with required servers...
> Configuring conformance test suite driver:
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/cts/server-url-root?url=https://t12-platform:9443)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/cts/server-type?typeName=Conformance)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/cts/event-bus?topicURLRoot=egeria)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/cts/cohorts/cts)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/cts/conformance-suite-workbenches/repository-workbench/repositories)
> Configuring technology under test:
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/server-url-root?url=https://t12-platform:9443)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/server-type?typeName=TUT)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/organization-name?name=Egeria)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/event-bus?topicURLRoot=egeria)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/local-repository/mode/plugin-repository/connection)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/cohorts/cts)
-- End of configuration
```
The CTS itself is then started up, followed by the technology under test:

```
-- Running the conformance test suite...
> Starting conformance test suite:
{"class":"SuccessMessageResponse","relatedHTTPCode":200,"successMessage":"Fri Oct 22 14:43:54 GMT 2021 cts is running the following services: [Open Metadata Repository Services (OMRS), Connected Asset Services, Conformance Suite Services]"}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/cts/instance)
> Starting the technology under test:
{"class":"SuccessMessageResponse","relatedHTTPCode":200,"successMessage":"Fri Oct 22 14:44:12 GMT 2021 tut is running the following services: [Open Metadata Repository Services (OMRS)]"}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/instance)
-- End of conformance test suite startup
```
Once the CTS is running, the pod enters a wait loop until the CTS has completed. The status of the CTS execution is re-checked every 30 seconds, with an update printed to the log each time:
```
-- Collecting results of the conformance test suite...
> Collecting basic configuration information...
> Waiting for the conformance test suite to complete...
... still waiting (0d:00h:01m:00s)
... still waiting (0d:00h:01m:30s)
...
```
Once completed, the pod will retrieve the detailed results from the CTS itself:
```
...
... still waiting (0d:02h:13m:30s)
> Retrieving detailed profile results...
... retrieving profile details for: Metadata sharing
... retrieving profile details for: Reference copies
... retrieving profile details for: Metadata maintenance
... retrieving profile details for: Dynamic types
... retrieving profile details for: Graph queries
... retrieving profile details for: Historical search
... retrieving profile details for: Entity proxies
... retrieving profile details for: Soft-delete and restore
... retrieving profile details for: Undo an update
... retrieving profile details for: Reidentify instance
... retrieving profile details for: Retype instance
... retrieving profile details for: Rehome instance
... retrieving profile details for: Entity search
... retrieving profile details for: Relationship search
... retrieving profile details for: Entity advanced search
... retrieving profile details for: Relationship advanced search
> Retrieving detailed test case results...
... retrieving test case details for: repository-attribute-typedef-ActivityType
...
```
Once all the results are collected, the pod bundles them into an archive and prints the location of that archive within the pod:
```
...
... retrieving test case details for: repository-typedef-ZoneHierarchy-null
> Bundling all results into an archive...
-- End of conformance test suite results collection, download from: /tmp/t12.tar.gz
```
**The report pod will continue running after the CTS has completed**

Note that the report pod itself will continue running after the CTS has completed: this is to provide adequate time to copy the bundled archive file of the results out of the pod. (If the pod were allowed to stop, any files within it would be lost.)
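Because the report pod stays up, one option (a sketch, not the chart's documented retrieval mechanism) is to copy the archive directly with `kubectl cp`, using the path printed at the end of the log:

```shell
# Copy the bundled archive out of the report pod; the path comes from the log output.
kubectl cp <podname>:/tmp/t12.tar.gz ./t12.tar.gz
```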
## Retrieving results
Once the CTS has completed running and the bundled archive file is available, it can be copied from the report pod:
**Copy results archive from the pod**

```shell
kubectl exec <podname> -- sh -c 'cat /export/pipe' | tar -xvf -
```

Where `<podname>` is the name of the report pod as discovered above (e.g. `t12-report-585b47f74-85r8m`) and `<name>` is the name of the Helm deployment. (Alternatively, you could just copy the location from the log output as shown above in the monitoring section.)

This will create a local file, for example:

```shell
$ kubectl exec cts-report--1-d8p6k -- sh -c 'cat /export/pipe' | tar -xvf -
Defaulted container "wait-for-retrieval" out of: wait-for-retrieval, wait-for-platform (init), wait-for-kafka (init), wait-for-init (init), report (init)
x export/cts.tar.gz
```
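Once retrieved, the inner archive can be unpacked locally with standard tools to inspect the detailed results (assuming a gzipped tar, as the `.tar.gz` suffix suggests):

```shell
# Unpack the retrieved results bundle for inspection.
tar -xzf export/cts.tar.gz
```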
## Uninstallation
Once you have retrieved the results, or if you want to otherwise cancel or stop the running of the CTS:
**Delete the deployment**

```shell
helm delete <name>
```

Where `<name>` is the name of your deployment. (If you are unsure what name you used, `helm list` lists all the deployments.)