UI Dashboard Customisation

Hi everyone, I have cloned all the repos from the Harvester GitHub organisation and made some changes in the dashboard repo (login page, dashboard page, logout page) as per the document. After making the build using the harvester-installer repo, the changes are not reflected.

Name                 Repo Address
Harvester            https://github.com/harvester/harvester
Harvester Dashboard  https://github.com/harvester/dashboard
Harvester Installer  https://github.com/harvester/harvester-installer

Setting up Env Variables

 $ export LOCAL_HARVESTER_SRC=/path/to/local/harvester/repo
 $ export LOCAL_ADDONS_SRC=/path/to/local/addons/repo
 $ make

Dashboard UI Text Changes:

file location: dashboard/shell/assets/translations/en-us.yaml
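
For example, to track down which strings to edit (the grep pattern here is just an illustration):

# illustrative only: find login-related strings in the translation file
grep -n "login" dashboard/shell/assets/translations/en-us.yaml | head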

Is there a GitHub issue on this?

Thanks for the reply, Robert :slightly_smiling_face:

No, I have made those customizations locally and made the build as per the document.

The main confusion is that when we make the build, everything is cloned from Git and Docker Hub, even if I set the environment variables in bash:

export LOCAL_HARVESTER_SRC=/path/to/local/harvester/repo
export LOCAL_ADDONS_SRC=/path/to/local/addons/repo

@robertsirc if there is any document or steps on how the local dashboard UI build integrates with the main harvester build, that would be great.

There’s a bunch of detail in the build process that is not yet well documented, and I realise now that my recent change to harvester-installer to add those LOCAL_HARVESTER_SRC and LOCAL_ADDONS_SRC environment variables may actually be a source of confusion. There’s a bit more information at https://github.com/harvester/harvester/wiki/Build-ISO-images, but this still really doesn’t provide enough detail for newcomers.

To your question about including changes to the dashboard in a custom ISO build, there is currently no easy way to do this. There is a hard way, which I’ve experimented with a bit today, which I will explain further down.

I should mention that if your goal is to make changes to the dashboard then submit them to the harvester project, and you just want to test your changes (as opposed to deploying them in production), you can do that from a checkout of the dashboard repo, against an existing harvester cluster, by running RANCHER_ENV=harvester API=https://your-harvester-ip yarn dev. I assume if you’re hacking on the dashboard code, you already found that mentioned in the README :wink:
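
Spelled out as a command (the cluster address is a placeholder):

# run the dashboard dev server against an existing harvester cluster
RANCHER_ENV=harvester API=https://your-harvester-ip yarn dev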

Now, to the build process… In broad strokes, it looks like this:

  • The harvester repo is the core of harvester itself. It contains the code required to build the harvester, harvester-upgrade and harvester-webhook container images, along with a harvester chart and tarballs of dependent charts. When you run make here, it will build those three containers. By default, they will be named rancher/harvester, rancher/harvester-upgrade and rancher/harvester-webhook. When we release Harvester (or the latest master branch is built by CI), those images are pushed to dockerhub, and so are available as docker.io/rancher/harvester, docker.io/rancher/harvester-upgrade and docker.io/rancher/harvester-webhook.
  • The harvester-installer repo contains the installer code itself, i.e. the console app that runs to deploy harvester, and also contains the scripts we use to build harvester ISO images, along with some other bits and pieces. Running make in here will build an ISO.
    • To build the ISO, harvester-installer needs access to the harvester repo (so it can get the chart definitions and some metadata like the harvester version), and the addons repo so it can get the yaml that defines various harvester addons. By default, when building an ISO, these repos will be cloned automatically, but you can override this with the LOCAL_*_SRC environment variables to use local checkouts (see the sketch after this list).
    • All the container images included in the ISO are pulled from dockerhub. That includes the harvester container image built from the harvester repo (which, in turn, embeds the dashboard). The harvester-installer will not, and cannot, build these pieces, regardless of whether you have those LOCAL_*_SRC environment variables set.
  • If you’re not actually doing work on the harvester-installer code itself, you can build an ISO directly from the harvester repo by running make build-iso. This will internally clone the harvester-installer repo to build the ISO, but remember, it’s still pulling all those container images from dockerhub.
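
As a concrete sketch of that LOCAL_*_SRC override (paths are placeholders):

# in the harvester-installer repo: build an ISO from local checkouts
# instead of letting the build clone the repos automatically
export LOCAL_HARVESTER_SRC=/path/to/local/harvester/repo
export LOCAL_ADDONS_SRC=/path/to/local/addons/repo
make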

So, how to build an ISO that includes your local changes to harvester code? As sketched out in the wiki page mentioned earlier, you need to get the container images built from the harvester repo pushed to a registry, so that harvester-installer can pull them from there when building the ISO. If you’re using dockerhub, that’s:

DOCKERHUB_USERNAME="replace with your dockerhub username"
docker login
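# copy docker credentials into the repo checkout, presumably so the
# containerized build can push images with your login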
cp ~/.docker . -r
export REPO=${DOCKERHUB_USERNAME}
export PUSH=true
make
make build-iso

You can run all the above from the harvester repo - no need to mess with harvester-installer unless you’re also making changes to the code there. If you are working with both repos, then you first run make from the harvester repo with the REPO environment variable set to your dockerhub username and PUSH=true to get the container images built and pushed. Then you go run make from the harvester-installer repo (still with REPO set), but also with LOCAL_HARVESTER_SRC set appropriately.
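
Put together, that two-repo workflow looks something like this (username and paths are placeholders):

# 1. in the harvester repo: build the container images and push them
export REPO=your-dockerhub-username
export PUSH=true
make

# 2. in the harvester-installer repo: build the ISO against those images,
#    using your local harvester checkout for charts and metadata
export REPO=your-dockerhub-username
export LOCAL_HARVESTER_SRC=/path/to/local/harvester/repo
make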

Now, finally, to the dashboard code. This is built from the dashboard repo. No part of the harvester build described above will build the dashboard tarball for you. Rather, it’s included automatically in the harvester container by downloading tarballs from releases.rancher.com (see https://github.com/harvester/harvester/blob/8b80f821f9da22a678ecef018a01d0fe241715a4/package/Dockerfile). This is hard-coded and cannot currently be overridden via environment variables. If you look at that code, there are three things being downloaded: api-ui, harvester-ui/dashboard and harvester-ui/plugin.

We can ignore api-ui here; the interesting bits are harvester-ui/dashboard and harvester-ui/plugin. The former is the dashboard that you see when you log in to harvester itself. The latter is a plugin version used when accessing harvester from rancher. Both are built from the dashboard repo. You can build the first tarball by running ./scripts/build-embedded – it will land in the dist subdirectory. I believe the plugin tarball can be built by running ./shell/scripts/build-pkg.sh harvester true and it will land in the dist-pkg subdirectory (I haven’t looked at that bit in detail yet).
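
In command form (with the caveat above that I haven’t verified the plugin build):

# from a checkout of the dashboard repo
./scripts/build-embedded                      # dashboard tarball, lands in dist/
./shell/scripts/build-pkg.sh harvester true   # plugin tarball, lands in dist-pkg/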

If you want to build your own modified dashboard and include it in an ISO, you have to build the dashboard tarballs as just described, then put them somewhere they can be accessed during the build process, and edit harvester’s package/Dockerfile to get them from there. I did a little test here, just of the main dashboard, not of the plugin bit. In my case, I was working out of my “wip-add-virtualSize” branch of the dashboard, so when I ran ./scripts/build-embedded, it gave me dist/wip-add-virtualSize.tar.gz. I then copied that to a local web server and updated the harvester source as follows (192.168.4.194 is the internal IP address of one of my systems, so that will obviously be different if you try this):

diff --git a/package/Dockerfile b/package/Dockerfile
index eaa102ff..035a8cc7 100644
--- a/package/Dockerfile
+++ b/package/Dockerfile
@@ -30,7 +30,7 @@ RUN curl -sLf ${!TINI_URL} > /usr/bin/tini && chmod +x /usr/bin/tini
 
 RUN mkdir -p /usr/share/harvester/harvester && \
     cd /usr/share/harvester/harvester && \
-    curl -sL https://releases.rancher.com/harvester-ui/dashboard/${HARVESTER_UI_VERSION}.tar.gz | tar xvzf - --strip-components=2 && \
+    curl -sL http://192.168.4.194/scratch/wip-add-virtualSize.tar.gz | tar xvzf - --strip-components=2 && \
     mkdir -p /usr/share/harvester/harvester/api-ui && \
     cd /usr/share/harvester/harvester/api-ui && \
     curl -sL https://releases.rancher.com/api-ui/${HARVESTER_API_UI_VERSION}.tar.gz | tar xvzf - --strip-components=1 && \

After doing that I was able to run the following from the harvester repo (no need to use harvester-installer in this specific case):

export REPO=tserong
export PUSH=true
make
make build-iso

Then I deployed harvester using the ISO I built… Which leads to the final kink. By default, when you log in to harvester, it will still download the dashboard from releases.rancher.com at runtime in preference to using the dashboard code that’s built into the ISO. To change this, log in to harvester, then go to Advanced > Settings > UI and set “ui-source” to “Bundled” and reload the page. Then you will finally see your changes.

(As mentioned, I didn’t build harvester-ui/plugin in my test, so when this cluster is imported into rancher, I don’t see my changes there.)

We do plan to refactor parts of the build process to make some of this a bit more straightforward (see “[TASK] Packaging ISO from harvester/harvester repo only”, https://github.com/harvester/harvester/issues/5859), but we’re not there yet.

Thanks for the reply @tserong :beers:, let me check the solution, because I have to install a harvester node in an air-gapped environment (without internet).


No problem. I had another thought – depending on exactly what it is you’re trying to achieve, you could look at running BASE=https://YOUR_WEB_SERVER/your-custom-harvester-dashboard ./scripts/build-hosted in the dashboard repo. This will build the dashboard in a subdirectory of dist. You then copy that subdirectory to your web server, change the Harvester ui-index setting to https://YOUR_WEB_SERVER/your-custom-harvester-dashboard/index.html, and your users will load your custom dashboard from that other server. No need to make a new ISO at all, assuming this is all for in-house use.
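
Roughly, that flow is (the server URL is a placeholder):

# build the dashboard for hosting at a given base URL
BASE=https://YOUR_WEB_SERVER/your-custom-harvester-dashboard ./scripts/build-hosted
# then copy the resulting subdirectory of dist/ to your web server and set
# Harvester's ui-index setting to:
#   https://YOUR_WEB_SERVER/your-custom-harvester-dashboard/index.html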

Note: I’ve only tested this quickly, and ran into some Cross-Origin Request Blocked errors loading fonts in the UI, so you might need to mess with that a bit. Also, if you’re using a self-signed SSL cert on your web server, you’ll need to make sure you visit that web server directly in your browser first to accept the certificate, before trying to access Harvester.

HTH

Just another piece of clarification on the above – by default, ui-source is set to “auto”. For official harvester release builds, “auto” will default to using the bundled dashboard. For non-release builds (which is what you get if you’re building something yourself from source), “auto” will default to trying to download the dashboard from releases.rancher.com and will only fall back to the bundled version if that site is inaccessible, hence the likely need to explicitly set ui-source to “bundled”.

@tserong :slightly_smiling_face: In my use case I have to install harvester in a confidential zone (there is no internet connectivity). On first use, the signup page sends a request to releases.rancher.com, and without internet it shows only a loader.

Is there any way I can change the default UI setting from “auto” to “bundled”?

I have applied some changes to the UI as well and tried what you suggested above. After the build process, when I change “auto” to “bundled” via the dashboard UI, it sends the request to the local IP address after the signup process, but the changes are still not reflected.

That’s odd – https://github.com/harvester/harvester/blob/8b80f821f9da22a678ecef018a01d0fe241715a4/pkg/server/ui/ui.go should make it fall back to bundled if you’re disconnected. Maybe that doesn’t work on the login screen for some reason (not sure, would need further investigation).

Anyway, if you can ssh into the host, run kubectl -n harvester-system edit settings/ui-source, then add a line at the end that says value: bundled. You can confirm it’s set by running kubectl -n harvester-system get settings/ui-source.
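
If you’d rather not use the interactive editor, a merge patch should work too (assuming the settings resource accepts patches like any other custom resource):

# set ui-source to bundled non-interactively, then verify
kubectl -n harvester-system patch settings/ui-source --type=merge -p '{"value":"bundled"}'
kubectl -n harvester-system get settings/ui-source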

@tserong Apart from this, can we add some conditions at the code level? I’ll make those changes and we could release this as a feature (for intranet environments) of Harvester.

I also want to contribute to the Harvester project.

Thank you very much for your instant reply :clinking_glasses:


Glad I could help, and contributions are of course most welcome :slight_smile:

Please open an issue on GitHub to describe what’s not working and/or what you’d like to see enhanced. It’ll be easier to discuss code changes over there, and of course if you do end up opening pull requests with changes, we need an issue (or issues) to associate the PRs with!