Performance Test Suite Chart (egeria-pts)¶
This is a deployment of Egeria that will automatically run the Performance Test Suite against a deployed metadata repository.
Prerequisites
In order to use the chart, you'll first need to have the following installed:
- A Kubernetes cluster at 1.15 or above
- the `kubectl` tool in your path
- Helm 3.0 or above
No configuration of the chart is required to use the defaults, but the available configuration options are described below.
Installation¶
Install (deploy) the PTS chart
helm repo add egeria https://odpi.github.io/egeria-charts
helm repo update
helm install [-f overrides.yaml] <name> egeria/egeria-pts
The `-f overrides.yaml` is optional, and only necessary if you are overriding any of the configuration (see options below), while `<name>` is the name you want to give your deployment.
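For example, to create a deployment named t12 (the name used in the sample output later on this page) with a local overrides file, the install might look like:
helm install -f overrides.yaml t12 egeria/egeria-pts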
The installation may take a minute or so
This is because it is not only creating the required objects in Kubernetes to run the platforms, but also configuring Egeria itself, which involves waiting for everything to start up before configuring Egeria via REST API calls.
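While this configuration is running, you can keep an eye on the pods starting up with a generic check (the pod names will vary with the deployment name you chose):
kubectl get pods -w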
Configuration¶
In a Helm chart, the configuration that has been externalized by the chart writer is specified in the `values.yaml` file, which you can find in this directory. However, rather than editing this file directly, it's recommended that you create an additional file with the required overrides (for example, called `overrides.yaml`).
The primary values you will likely want to override in this chart are as follows:
Technology under test¶
The technology under test ("tut") defines the repository that you want to run the PTS against. By default, the chart will run the PTS against the built-in graph repository. To configure it to test some other repository, you will need to override one or more of the following:
| Parameter | Description |
|---|---|
| `tut.serverType` | Defines the type of the repository to be tested: `native` for a built-in repository of Egeria core (like the in-memory repository), `plugin` for a pluggable native repository (like XTDB), or `proxy` for a third party repository technology (like Apache Atlas). |
| `tut.connectorProvider` | Should be the canonical class name of the connector provider class for the repository connector to use. |
| `tut.connectorConfig` | An optional structure defining any additional configuration for the connector. This should be provided as YAML, and will be automatically translated into JSON to pass through to the `configurationProperties` parameter of the connector when configuring it. |
| `tut.serverEndpoint.host` | When using a third party technology that relies on integrating to a system outside Egeria (i.e. when using `proxy`), this is the hostname of the third party technology. |
| `tut.serverEndpoint.port` | When using a third party technology that relies on integrating to a system outside Egeria (i.e. when using `proxy`), this is the port number of the third party technology. |
| `tut.serverEndpoint.protocol` | When using a third party technology that relies on integrating to a system outside Egeria (i.e. when using `proxy`), this is the protocol to use when accessing the third party technology (e.g. `https`). |
| `tut.serverEndpoint.username` | When using a third party technology that relies on integrating to a system outside Egeria (i.e. when using `proxy`), this is the username of the system user to use when accessing the third party technology. |
| `tut.serverEndpoint.password` | When using a third party technology that relies on integrating to a system outside Egeria (i.e. when using `proxy`), this is the password for the system user to use when accessing the third party technology. |
Example `tut` override for XTDB
tut:
  serverType: "plugin"
  connectorProvider: "org.odpi.egeria.connectors.juxt.xtdb.repositoryconnector.XtdbOMRSRepositoryConnectorProvider"
  connectorConfig:
    xtdbConfig:
      xtdb/index-store:
        kv-store:
          xtdb/module: xtdb.lmdb/->kv-store
          db-dir: data/servers/xtdb/lmdb-index
      xtdb/document-store:
        kv-store:
          xtdb/module: xtdb.rocksdb/->kv-store
          db-dir: data/servers/xtdb/rdb-docs
      xtdb/tx-log:
        kv-store:
          xtdb/module: xtdb.rocksdb/->kv-store
          db-dir: data/servers/xtdb/rdb-tx
      xtdb.lucene/lucene-store:
        db-dir: data/servers/xtdb/lucene
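For comparison, a hypothetical `tut` override for a proxy-type repository would combine the `serverEndpoint` values described in the table above. The connector provider class name and endpoint details below are illustrative placeholders only, not values taken from any real connector:
tut:
  serverType: "proxy"
  connectorProvider: "com.example.MyRepositoryProxyConnectorProvider"   # illustrative placeholder
  serverEndpoint:
    host: "my-repository.example.com"
    port: "9021"
    protocol: "https"
    username: "admin"
    password: "my-password"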
Connector downloads¶
In addition to the general technology under test configuration outlined above, when configuring the chart to test a non-core Egeria connector you also need to specify any dependencies that need to be downloaded to run that connector (i.e. at a minimum to make that connector's connector provider and connector itself available to the pod running the technology under test).
You do this by overriding the `downloads` value with a list of `filename` and `url` pairs.
Example `downloads` override for XTDB
downloads:
  - filename: egeria-connector-xtdb-LATEST_RELEASE-jar-with-dependencies.jar
    url: "http://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.odpi.egeria&a=egeria-connector-xtdb&v=RELEASE&c=jar-with-dependencies"
Why both filename and URL?
As illustrated in the example above, some URLs may dynamically redirect and resolve to a given filename, for example to always download the latest release of a given file. Because only minimal utilities are installed in the pod to perform these downloads, there are cases (like the example above) where they cannot automatically determine the filename to produce from such dynamic URLs.
Therefore, by specifying both the URL and the filename we can ensure the file is downloaded and stored with the expected name (like a `.jar` file), so that it can be automatically resolved by e.g. Java's class loader.
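If you prefer a fully deterministic download instead, one option (sketched below with an illustrative version number; the Maven coordinates are taken from the URL above) is to point directly at a versioned artifact so the filename is known up front:
downloads:
  - filename: egeria-connector-xtdb-3.2-jar-with-dependencies.jar   # version shown is illustrative
    url: "https://repo1.maven.org/maven2/org/odpi/egeria/egeria-connector-xtdb/3.2/egeria-connector-xtdb-3.2-jar-with-dependencies.jar"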
Scale of test¶
The volume aspects of the PTS can be configured using the following options:
| Parameter | Description |
|---|---|
| `instancesPerType` | The number of instances to create for each type supported by the repository. |
| `maxSearchResults` | The maximum number of search results to retrieve per page of results. |
| `waitBetweenScenarios` | How long to wait (in seconds) between test cases. This is primarily useful when testing asynchronous (eventually-consistent) repositories. |
| `profilesToSkip` | A list of the names of the profiles that should be skipped during the testing. This can be used to avoid testing profiles that are not of interest, or will cause the test to run for far longer than desired. |
| `methodsToSkip` | A list of the individual metadata collection methods that should be skipped during testing. This can be used to target specific methods that are not of interest, or will cause the test to run for far longer than desired. |
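As a sketch, an `overrides.yaml` tuning these values might look like the following; the numbers are illustrative, and the profile names are taken from the profile list shown in the monitoring output below:
instancesPerType: 10
maxSearchResults: 5
waitBetweenScenarios: 0
profilesToSkip:
  - "Historical search"
  - "Graph queries"
methodsToSkip: []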
Monitoring progress¶
You can monitor the progress of the PTS execution by looking at the log output of the `init-and-report` pod:
Get the `init-and-report` pod name
kubectl get pods -l app.kubernetes.io/component=init-and-report
Example output for retrieving the `init-and-report` pod name
NAME READY STATUS RESTARTS AGE
p320-10-init-and-report-5845c9bb79-5ddwh 1/1 Running 0 45h
t12-init-and-report-585b47f74-85r8m 1/1 Running 0 39m
If you have multiple PTS deployments running in parallel in your cluster, this may return more than one result (as in the example above). The one whose name starts with the name you gave your deployment is the one that represents your particular deployment.
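If you want to limit the output to just your own deployment, you can usually add the standard Helm instance label to the selector (this assumes the chart applies the common app.kubernetes.io/instance label; if it does not, simply pick the matching pod from the list as described above):
kubectl get pods -l app.kubernetes.io/component=init-and-report,app.kubernetes.io/instance=<name>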
Once you have the pod name, you can then view the log:
Review the `init-and-report` pod logs
kubectl logs -f <podname>
Where `<podname>` is the name of the pod discovered in the command above (e.g. `t12-init-and-report-585b47f74-85r8m`).
Example output from the `init-and-report` log
The opening section of the log simply displays the environment variables that have been configured, which is primarily useful for debugging or other diagnostic purposes:
-- Environment variables --
...
CONNECTOR_PROVIDER=org.odpi.egeria.connectors.juxt.xtdb.repositoryconnector.XtdbOMRSRepositoryConnectorProvider
...
-- End of Environment variables --
The configuration then occurs. Of primary importance here is that all the results are `200`, indicating each operation was successful:
-- Configuring platform with required servers...
> Configuring performance test suite driver:
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/pts/server-url-root?url=https://t12-platform:9443)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/pts/server-type?typeName=Conformance)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/pts/event-bus?topicURLRoot=egeria)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/pts/cohorts/pts)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/pts/conformance-suite-workbenches/repository-workbench/performance)
> Configuring technology under test:
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/server-url-root?url=https://t12-platform:9443)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/server-type?typeName=TUT)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/organization-name?name=Egeria)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/event-bus?topicURLRoot=egeria)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/local-repository/mode/plugin-repository/connection)
{"class":"VoidResponse","relatedHTTPCode":200}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/cohorts/pts)
-- End of configuration
The PTS itself is then started up, followed by the technology under test:
-- Running the performance test suite...
> Starting performance test suite:
{"class":"SuccessMessageResponse","relatedHTTPCode":200,"successMessage":"Fri Oct 22 14:43:54 GMT 2021 pts is running the following services: [Open Metadata Repository Services (OMRS), Connected Asset Services, Conformance Suite Services]"}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/pts/instance)
> Starting the technology under test:
{"class":"SuccessMessageResponse","relatedHTTPCode":200,"successMessage":"Fri Oct 22 14:44:12 GMT 2021 tut is running the following services: [Open Metadata Repository Services (OMRS)]"}
(200 - https://t12-platform:9443/open-metadata/admin-services/users/admin/servers/tut/instance)
-- End of performance test suite startup
Once the PTS is running, a busy loop waits until the PTS has completed. The status of the PTS execution is re-checked every 30 seconds and an update is printed to the log accordingly:
-- Collecting results of the performance test suite...
> Collecting basic configuration information...
> Waiting for the performance test suite to complete...
... still waiting (0d:00h:01m:00s)
... still waiting (0d:00h:01m:30s)
...
Once completed, the pod will retrieve the detailed results from the PTS itself:
...
... still waiting (0d:02h:13m:30s)
> Retrieving detailed profile results...
... retrieving profile details for: Metadata sharing
... retrieving profile details for: Reference copies
... retrieving profile details for: Metadata maintenance
... retrieving profile details for: Dynamic types
... retrieving profile details for: Graph queries
... retrieving profile details for: Historical search
... retrieving profile details for: Entity proxies
... retrieving profile details for: Soft-delete and restore
... retrieving profile details for: Undo an update
... retrieving profile details for: Reidentify instance
... retrieving profile details for: Retype instance
... retrieving profile details for: Rehome instance
... retrieving profile details for: Entity search
... retrieving profile details for: Relationship search
... retrieving profile details for: Entity advanced search
... retrieving profile details for: Relationship advanced search
> Retrieving detailed test case results...
... retrieving test case details for: repository-attribute-typedef-ActivityType
...
Once all the results are collected, the pod will bundle them into an archive and print out the location of that archive within the pod:
...
... retrieving test case details for: repository-typedef-ZoneHierarchy-null
> Bundling all results into an archive...
-- End of performance test suite results collection, download from: /tmp/t12.tar.gz
The pod will continue running after the PTS has completed
Note that the pod itself will continue running after the PTS has completed: this is to provide adequate time to copy the bundled archive file of the results out of the pod. (If the pod were allowed to stop, any files within it would be lost.)
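If you want to confirm the archive is present before copying it, one quick check (assuming the usual ls utility is available in the pod image) is:
kubectl exec <podname> -- ls -l /tmp/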
Retrieving results¶
Once the PTS has completed running and the bundled archive file is available, it can be copied from the pod:
Copy results archive from the pod
kubectl cp <podname>:/tmp/<name>.tar.gz <filename>
Where `<podname>` is the name of the pod as discovered above (e.g. `t12-init-and-report-585b47f74-85r8m`) and `<name>` is the name of the Helm deployment. (Alternatively, you could just copy the location from the log output as shown above in the monitoring section.) `<filename>` is the location on your local filesystem to which you want to copy the archive.
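For example, using the pod and deployment names from the sample output above (your names will differ), the copy and a subsequent extraction might look like:
kubectl cp t12-init-and-report-585b47f74-85r8m:/tmp/t12.tar.gz ./t12.tar.gz
tar -xzf t12.tar.gz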
Uninstallation¶
Once you have retrieved the results, or if you want to otherwise cancel or stop the running of the PTS:
Delete the deployment
helm delete <name>
Where `<name>` is the name of your deployment. (If you are unsure what name you used, `helm list` lists all the deployments.)
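For example, if your deployment was named t12 (as in the sample output above), the clean-up might look like:
helm list
helm delete t12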