# Running Locally

## Development Environment Setup
You have two options for setting up your development environment:
- Use the Dev Container, either locally or via GitHub Codespaces. This is usually the fastest and easiest way to get started.
- Manual installation of the necessary tooling. This requires a basic understanding of administering Kubernetes and package management for your OS.
## Initial Local Setup

Unless you're using GitHub Codespaces, the first step is cloning the Git repo into `$GOPATH/src/github.com/argoproj/argo-workflows`. Any other path will break the code generation.
## Development Container

Prebuilt development container images are provided for both amd64 and arm64, containing everything you need to develop Argo Workflows without installing tools on your local machine. Provisioning a dev container is fully automated and typically takes ~1 minute.
You can use the development container in a few different ways:
- Visual Studio Code with the Dev Containers extension. Open your `argo-workflows` folder in VSCode and it should offer to use the development container automatically. VSCode will allow you to forward ports so your external browser can access the running components.
- `devcontainer` CLI. In your `argo-workflows` folder, run `make devcontainer-up`, which will automatically install the CLI and start the container. Then use `devcontainer exec --workspace-folder . /bin/bash` to get a shell where you can build the code. You can use any editor outside the container to edit code; any changes will be mirrored inside the container. Due to a limitation of the CLI, only port 8080 (the Web UI) will be exposed for you to access if you run this way. Other services are usable from the shell inside.
- GitHub Codespaces. You can start editing as soon as VSCode is open, though you may want to wait for `pre-build.sh` to finish installing dependencies, building binaries, and setting up the cluster before running any commands in the terminal. Once you start running services (see next steps below), you can click on the "PORTS" tab in the VSCode terminal to see all forwarded ports. You can open the Web UI in a new tab from there.
Once you have entered the container, continue to Developing Locally.
The container runs k3d via docker-in-docker, so you have a cluster to test against. To communicate with services running either in other development containers or directly on the local machine (e.g. a database), use the URL `host.docker.internal:<PORT>` in the workflow spec. This facilitates the implementation of workflows which need to connect to a database or an API server.
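For example, a minimal sketch of a workflow step that reaches a service on the host machine (the image and port here are illustrative placeholders, not part of the project):

```bash
# Hypothetical example: curl a service on the host from inside a workflow pod.
# host.docker.internal resolves to the machine hosting the dev container.
kubectl create -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: host-access-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: curlimages/curl:latest
        args: ["http://host.docker.internal:8000"]
EOF
```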
Note for Windows: configure `.wslconfig` to limit WSL2's memory usage, to prevent VSCode from running out of memory.
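A minimal sketch of doing that (the 8GB cap is an illustrative value, not a project recommendation; apply it afterwards by running `wsl --shutdown` from Windows):

```bash
# Illustrative only: cap WSL2 memory by writing %UserProfile%\.wslconfig
# from inside WSL. WIN_USER is a placeholder for your Windows user name.
WIN_USER=your-windows-username
cat > "/mnt/c/Users/${WIN_USER}/.wslconfig" <<'EOF'
[wsl2]
memory=8GB
EOF
```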
## Manual Installation
To build on your own machine without using the Dev Container you will need:
- Go
- Yarn
- Docker
- `protoc`
- `node` for running the UI
- A local Kubernetes cluster (`k3d`, `kind`, or `minikube`)
- The following entries in your `/etc/hosts` file (one way to append them is sketched after this list):

```
127.0.0.1 dex
127.0.0.1 minio
127.0.0.1 postgres
127.0.0.1 mysql
127.0.0.1 azurite
```
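If the entries aren't already present, you can append them in one step (review your `/etc/hosts` first to avoid duplicates):

```bash
sudo tee -a /etc/hosts <<'EOF'
127.0.0.1 dex
127.0.0.1 minio
127.0.0.1 postgres
127.0.0.1 mysql
127.0.0.1 azurite
EOF
```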
We recommend using `k3d` to set up the local Kubernetes cluster, since it is fast and allows you to test RBAC set-up. You can set up `k3d` to be part of your default kube config as follows:

```bash
k3d cluster start --wait
```

Alternatively, you can use Minikube to set up the local Kubernetes cluster. Once a local Kubernetes cluster has started via `minikube start`, your kube config will use Minikube's context automatically.

Warning
Do not use Docker Desktop's embedded Kubernetes; it does not support Kubernetes RBAC (i.e. `kubectl auth can-i` always returns allowed).
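You can see the difference with a quick impersonated check (a sketch; the service account named here is just the namespace default):

```bash
# On k3d/kind/minikube this should print "no" for an unprivileged service account;
# Docker Desktop's embedded Kubernetes answers "yes" regardless.
kubectl auth can-i create deployments --as=system:serviceaccount:default:default
```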
## Developing Locally
To start:

- The controller, so you can run workflows.
- MinIO (http://localhost:9000, use admin/password) so you can use artifacts.

Run:

```bash
make start
```
Make sure you don't see any errors in your terminal. This runs the Workflow Controller locally on your machine (not in Docker/Kubernetes).
You can submit a workflow for testing using `kubectl`:

```bash
kubectl create -f examples/hello-world.yaml
```
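You can then watch its progress; for example:

```bash
# "wf" is the short name for the Workflow resource
kubectl get wf --watch
```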
We recommend running `make clean` before `make start` to ensure recompilation.
If you made changes to the executor, you need to build the image:

```bash
make argoexec-image
```
You can use the `TARGET_PLATFORM` environment variable to compile images for specific platforms:

```bash
# compile for both arm64 and amd64
make argoexec-image TARGET_PLATFORM=linux/arm64,linux/amd64
```
Warning
If you see the error `expected 'package', found signal_darwin`, symlinks may not be configured for your Git installation. Run `git config core.symlinks true` to correct this.
To also start the API on http://localhost:2746:

```bash
make start API=true
```
This runs the Argo Server (in addition to the Workflow Controller) locally on your machine.
To also start the UI on http://localhost:8080 (`UI=true` implies `API=true`):

```bash
make start UI=true
```

If you are making changes to the CLI (i.e. Argo Server), you can build it separately if you want:

```bash
make cli
./dist/argo submit examples/hello-world.yaml  # the new CLI is created as ./dist/argo
```

Note that the CLI is built automatically if you run `make start API=true`.
To test the workflow archive, use `PROFILE=mysql` or `PROFILE=postgres`:

```bash
make start PROFILE=mysql
```

You'll then have either:

- Postgres on http://localhost:5432; run `make postgres-cli` to access it.
- MySQL on http://localhost:3306; run `make mysql-cli` to access it.
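You can also pipe ad-hoc SQL into these targets, the same way restores work below; for example, to count archived workflows in Postgres (a sketch; `argo_archived_workflows` is the archive table mentioned in the Database Tooling section):

```bash
echo 'SELECT count(*) FROM argo_archived_workflows;' | make postgres-cli
```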
To back up the database, use `make postgres-dump` or `make mysql-dump`, which will generate a SQL dump in the `db-dumps/` directory:

```bash
make postgres-dump
```

To restore the backup, use `make postgres-cli` or `make mysql-cli`, piping in the file from the `db-dumps/` directory.
Note that this is destructive and will delete any data you have stored.

```bash
make postgres-cli < db-dumps/2024-10-16T17:11:58Z.sql
```
To test SSO integration, use `PROFILE=sso`:

```bash
make start UI=true PROFILE=sso
```
### Proxying

When using `UI=true`, `make start` will start `webpack-dev-server` to serve requests to http://localhost:8080, while proxying API requests to the Argo Server at http://localhost:2746.
Use `BASE_HREF` to customize the base HREF, which will cause `webpack-dev-server` to strip out the provided path when proxying requests to the Argo Server.
For example, to make the UI accessible at http://localhost:8080/argo/:

```bash
make start UI=true BASE_HREF=/argo/
```

Note that if you're using `PROFILE=sso`, you may need to run `kubectl rollout restart deploy dex` to restart Dex after changing the base HREF.
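To confirm the UI is actually being served at the custom base HREF, a quick check (a sketch, assuming the UI is running as above):

```bash
# Expect an HTTP 200 from the UI at the custom path
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/argo/
```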
### TLS

By default, `make start` will start Argo in plain text mode.
To simulate a TLS proxy in front of Argo, use `UI_SECURE=true` (which implies `UI=true`):

```bash
make start UI_SECURE=true
```

To start Argo in encrypted mode, use `SECURE=true`, which can optionally be combined with `UI_SECURE=true`:

```bash
make start SECURE=true UI_SECURE=true
```
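To verify the encrypted endpoint responds, a sketch (the `-k` flag skips certificate verification, on the assumption that the local server uses a self-signed certificate):

```bash
curl -k https://localhost:2746/api/v1/version
```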
## Running E2E Tests Locally

Start up Argo Workflows using the following:

```bash
make start PROFILE=mysql AUTH_MODE=client STATIC_FILES=false API=true
```

If you want to run Azure tests against a local Azurite:

```bash
kubectl -n $KUBE_NAMESPACE apply -f test/e2e/azure/deploy-azurite.yaml
make start
```
### Running One Test

In most cases, you want to run the test that relates to your changes locally. You should not run all the test suites: our CI will run those concurrently when you create a PR, which will give you feedback much faster.
Find the test that you want to run in `test/e2e`:

```bash
make TestArtifactServer
```
### Running A Set Of Tests

You can find the build tag at the top of the test file:

```go
//go:build api
```

You need to run `make test-{buildTag}`, so for `api` that would be:

```bash
make test-api
```
### Diagnosing Test Failure

Tests often fail: that's good. To diagnose failure (a combined sequence is sketched after this list):

- Run `kubectl get pods`. Are the pods in the state you expect?
- Run `kubectl get wf`. Is your workflow in the state you expect?
- What do the pod logs say? I.e. `kubectl logs`.
- Check the controller and argo-server logs. These are printed to the console you ran `make start` in. Is anything logged at `level=error`?
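A typical sequence tying these checks together:

```bash
kubectl get pods   # are the pods in the state you expect?
kubectl get wf     # is the workflow in the state you expect?
# pick a pod name from `kubectl get pods`; "main" is the workflow pod's main container
kubectl logs my-wf-pod -c main   # my-wf-pod is a placeholder
```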
If tests run slowly or time out, factory reset your Kubernetes cluster.
## Database Tooling

The `go run ./hack/db` CLI provides a few useful commands for working with the DB locally:

```
$ go run ./hack/db
CLI for developers to use when working on the DB locally

Usage:
  db [command]

Available Commands:
  completion               Generate the autocompletion script for the specified shell
  fake-archived-workflows  Insert randomly-generated workflows into argo_archived_workflows, for testing purposes
  help                     Help about any command
  migrate                  Force DB migration for given cluster/table

Flags:
  -c, --dsn string   DSN connection string. For MySQL, use 'mysql:password@tcp/argo'. (default "postgres://postgres@localhost:5432/postgres")
  -h, --help         help for db

Use "db [command] --help" for more information about a command.
```
## Debugging using Visual Studio Code

When using the Dev Container with VSCode, use the `Attach to argo server` and/or `Attach to workflow controller` launch configurations to attach to the `argo` or `workflow-controller` processes, respectively.
This will allow you to start a debug session, where you can inspect variables and set breakpoints.
## Committing

Before you commit code and raise a PR, always run:

```bash
make pre-commit -B
```
Please do the following when creating your PR:
- Sign-off your commits.
- Use Conventional Commit messages.
- Suffix the issue number.
Examples:

```bash
git commit --signoff -m 'fix: Fixed broken thing. Fixes #1234'
git commit --signoff -m 'feat: Added a new feature. Fixes #1234'
```
## Creating Feature Descriptions

When adding a new feature, you must create a feature description file that will be used to generate new feature information when we do a feature release:

```bash
make feature-new
```
This will create a new feature description file in the `.features` directory, which you must then edit to describe your feature.
By default, it uses your current branch name as the file name.
The file name isn't used by the tooling; it just needs to be unique to your feature so it doesn't collide on merge.
You can also specify a custom file name:

```bash
make feature-new FEATURE_FILENAME=my-awesome-feature
```

You must have an issue number to associate with your PR for features, and that number must be placed in this file.
It is reasonable to expect that all new features are discussed in an issue before being developed.
There is a `Component` field, which must match one of the components in `hack/featuregen/components.go`.
The feature file should be included in your PR to document your changes. Before submitting, you can validate your feature file:

```bash
make features-validate
```

The `pre-commit` target will also do this.
You can also preview how your feature will appear in the release notes:

```bash
make features-preview
```
This command runs a dry-run of the release notes generation process, showing you how your feature will appear in the markdown file that will be used to generate the release notes.
## Troubleshooting

- When running `make pre-commit -B`, if you encounter errors like `make: *** [pkg/apiclient/clusterworkflowtemplate/cluster-workflow-template.swagger.json] Error 1`, ensure that you have checked out your code into `$GOPATH/src/github.com/argoproj/argo-workflows`.
- If you encounter "out of heap" issues when building the UI through Docker, check the resources allocated to Docker; compilation may fail if the allocated RAM is less than 4Gi.
- To start profiling with `pprof`, pass `ARGO_PPROF=true` when starting the controller locally (see the sketch after the commands below), then run the following:
```bash
go tool pprof http://localhost:6060/debug/pprof/profile # 30-second CPU profile
go tool pprof http://localhost:6060/debug/pprof/heap    # heap profile
go tool pprof http://localhost:6060/debug/pprof/block   # goroutine blocking profile
```
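For example (a sketch; it assumes `ARGO_PPROF` can be passed as a `make start` option like the others in this guide):

```bash
# Assumption: ARGO_PPROF is forwarded to the locally-running controller
make start ARGO_PPROF=true
```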