treesit-jump-jump (used to jump to match)
treesit-jump-select (used to select the region of a match)
treesit-jump-delete (used to delete the region of a match)
treesit-jump-parent-jump (used to jump to a parent of the current node under the cursor)
For more information (including installation instructions) see here: https://github.com/dmille56/treesit-jump.
Introducing powershell-ts-mode: a mode for editing PowerShell files using Emacs and tree-sitter. It includes support for syntax highlighting, imenu, indentation, and more. The old powershell-mode has bugs handling multi-line comments (ex: "<#\n#>") and strings containing escaped quotes (ex: "\""). powershell-ts-mode fixes those and offers more robust handling by using tree-sitter instead of regexes for parsing and syntax highlighting.
For more information (including installation instructions) see here: https://github.com/dmille56/powershell-ts-mode.
Here’s a link to the git repo containing the conversion: https://github.com/SpartanEngineer/godot3-dodge-haskell. It is mostly a 1-1 translation without relying too much on Haskell tricks. I won’t go over the specifics here since they’re already covered in the original tutorial. This uses version 3.2.1 of Godot.
Use stack new
with the template in the godot-haskell repo:
stack new myproject https://raw.githubusercontent.com/SimulaVR/godot-haskell/master/template/godot-haskell.hsfiles
NOTE: this uses the Haskell TypeFamilies language extension.
_get_node' node name = get_node node `submerge` name >>= _tryCast'
data Mob2
= Mob2
{ _mob2_Base :: GodotRigidBody2D,
_mob2_MobType :: Text,
_mob2_Speed :: Float
}
instance HasBaseClass Mob2 where
type BaseClass Mob2 = GodotRigidBody2D
super = _mob2_Base
instance NativeScript Mob2 where
classInit base = pure $ Mob2 base "fly" 175
classMethods =
[ func NoRPC "_ready" $
\s _ -> do
animated <- _get_node' s "AnimatedSprite" :: IO GodotAnimatedSprite
toLowLevel (_mob2_MobType s) >>= set_animation animated,
func NoRPC "_on_Visibility_screen_exited" $
\s _ -> queue_free s,
func NoRPC "_on_start_game" $
\s _ -> queue_free s
]
You can also add signals via classSignals
and the signal
function.
instance NativeScript Hud2 where
classInit base = -- ...
classMethods = -- ...
classSignals =
[ signal "start_game" [] -- [] is of type [(Text, GodotVariantType)]... this represents the args for the signal. Text = name of arg, GodotVariantType = type of arg.
]
You can call a signal via the emit_signal
function.
-- emit_signal :: NativeScript a => a -> a -> [(Text, GodotVariantType)] -> IO GodotVariant
-- [(Text, GodotVariant)]... this represents the args passed to the signal. Text = name of arg, GodotVariant = value of arg.
emit_signal s gameStr []
Use the exports
and registerClass
functions to allow usage in Godot from the built library.
exports :: GdnativeHandle -> IO ()
exports desc = do
registerClass $ RegClass desc $ classInit @Player
registerClass $ RegClass desc $ classInit @Mob
-- ...
registerClass $ RegClass desc $ classInit @MyClass
Here’s a suggested workflow:
Here’s a tip you can use with a working IDE setup or a GHCi REPL: apply a function to some of its arguments, bind the result to a variable (say xyz),
then check the type of xyz to see what arguments it is expecting next.
Use TVar
to read/write the variable in conjunction with atomically.
Use the fromLowLevel
& toLowLevel
functions.
Use fromGodotVariant
/ toVariant
to convert to / from Godot variants.
Use asNativeScript
to convert to a NativeScript Haskell type.
Use godot_global_get_singleton
Use instance'
on the global godot ClassDB
db <- Api.godot_global_get_singleton & withCString (unpack "ClassDB") >>= tryCast :: IO (Maybe Godot_ClassDB)
case db of
Just classDb -> do
cName <- toLowLevel "RandomNumberGenerator" -- Name of the Godot class to make
cls <- instance' classDb cName >>= fromGodotVariant :: IO GodotObject
rng <- tryCast cls :: IO (Maybe GodotRandomNumberGenerator)
return rng
Nothing -> error "Unable to load global class db"
Use load on the global Godot ResourceLoader (you may want to check that the resource exists via the exists
function first). Then use instance'
on the loaded resource to make a new instance of it.
rlMaybe <- Api.godot_global_get_singleton & withCString (unpack "ResourceLoader") >>= tryCast :: IO (Maybe Godot_ResourceLoader)
case rlMaybe of
Just rl -> do
cName <- toLowLevel "PackedScene" -- Name of the Godot class to make
url <- toLowLevel "res://Mob2.tscn" -- Path to load
exist <- exists rl url cName
case exist of
True -> do
r <- load rl url cName False :: IO (Maybe GodotResource)
return (r)
False -> error "Unable to load class at the url inputted"
Nothing -> error "Unable to load global resource loader"
Dynamically load via the resource loader, create an instance of it, and then finally call asNativeScript
on it to convert it to your NativeScript type.
mobPackedSceneMaybe <- load' "PackedScene" "res://Mob2.tscn" >>= tryCast :: IO (Maybe GodotPackedScene)
case mobPackedSceneMaybe of
Just mobPackedScene -> do
mobObj <- instance' mobPackedScene 0
mob2 <- asNativeScript (safeCast mobObj) :: IO (Maybe Mob2)
return (mob2)
Nothing -> error "Unable to load: res://Mob2.tscn"
You can call a signal via the emit_signal
function.
-- emit_signal :: NativeScript a => a -> a -> [(Text, GodotVariantType)] -> IO GodotVariant
-- [(Text, GodotVariant)]... this represents the args passed to the signal. Text = name of arg, GodotVariant = value of arg.
emit_signal s gameStr []
You can connect a godot signal via connect
function.
-- connect :: NativeScript a => a -> GodotString -> GodotObject -> GodotString -> GodotArray -> Int -> IO Int
connect hud2 startGameStr (safeCast mob2) onStartGameStr gArr 0
Use one of tryCast
, tryObjectCast
, or safeCast.
Use get_node
. This can take a path to a node as well (relative or absolute). See: https://docs.godotengine.org/en/stable/classes/class_node.html#class-node-method-get-node.
startPositionStr <- toLowLevel "StartPosition" :: IO GodotString
startPosition <- get_node s startPositionStr >>= tryCast :: IO (Maybe GodotPosition2D)
I’ve used async
in Haskell instead without issues so far. It’s not the same, but it allows for asynchronous behaviour when needed.
Let’s first start with a description of what the final configuration will look like. The server will have a git user named git
that will handle all the git operations. The git repositories themselves will be stored in the git user’s home directory. The git user’s login shell will be set to git-shell
. Git-shell is a shell that comes with git that only allows running git commands. This will prevent git users from doing anything fishy on the server. Cloning git repositories from the git server will be as simple as:
git clone git@<IP_ADDRESS_OF_SERVER>:<NAME_OF_GIT_REPOSITORY>
We will supply our server with shell scripts that can be used to back up repositories, create a new repository, delete a repository, and list all the git repositories stored on the server. A cron job will be configured to run the backup script once a day. Nix will be used to package our shell scripts, and if a change is made to any of the scripts we can use NixOps to deploy the changes for us with no change to our configuration files.
The git-server.nix file contains the configuration of our network (which in this case is simply the git server itself). It is as follows:
{
network.description = "Git server";
git-server =
{ config, pkgs, ... }:
let
repos-dir = "/home/git"; #set up a directory to hold the git repos, this will also be the git users home directory
repos = import ./repos-packages.nix { inherit pkgs repos-dir; };
in
{
imports = [
./virtualbox.nix
];
time.timeZone = "UTC";
services.openssh.enable = true;
services.cron.enable = true;
services.cron.systemCronJobs = [ "30 9 * * * root repos-backup" ]; #run repos-backup once a day at 9:30
nix.gc.automatic = true;
environment.systemPackages = with pkgs;
[
vim git
#custom scripts
repos.repos-backup repos.repos-create repos.repos-delete repos.repos-list repos.repos-setenvvars
];
users.mutableUsers = false;
users.users.root.openssh.authorizedKeys.keys = [
"your public rsa key goes here"
];
users.users.git = {
isNormalUser = true;
description = "git user";
createHome = true;
home = "${repos-dir}";
shell = "${pkgs.git}/bin/git-shell";
openssh.authorizedKeys.keys = ["your public rsa key goes here"];
};
};
}
Let’s go through this config piece by piece to explain what is going on.
Describe the network as well as define the git-server (the only server in our network).
{ config, pkgs, ... }:
let
repos-dir = "/home/git"; #set up a directory to hold the git repos, this will also be the git users home directory
repos = import ./repos-packages.nix { inherit pkgs repos-dir; };
Take the config and pkgs as inputs. The let binding here defines variables to be used in the following block of code. repos-dir
defines the directory to store the git repositories. repos
contains the Nix derivations, imported from repos-packages.nix
(which we will write later), that build our shell scripts. We need to give it pkgs & repos-dir (so that the directory to store git repositories is defined in one place & easily changeable in the configuration).
Begin the block of code defining the git server.
Import the VirtualBox deployment (which we will write later). This can be swapped out for another deployment method (for example if you wanted to deploy to the cloud instead).
time.timeZone = "UTC";
services.openssh.enable = true;
services.cron.enable = true;
services.cron.systemCronJobs = [ "30 9 * * * root repos-backup" ]; #run repos-backup once a day at 9:30
Set the timezone to UTC (Note: the timezone must be set or cron won’t work right with NixOS). Feel free to change it to a different time zone. Enable openssh so that your server can be connected to with SSH. Enable cron and set it up with a job to back up all of your git repositories once a day at 9:30. repos-backup
is the name of the script, which we will write later, that backs up the git repositories.
nix.gc.automatic = true;
environment.systemPackages = with pkgs;
[
vim git
#custom scripts
repos.repos-backup repos.repos-create repos.repos-delete repos.repos-list repos.repos-setenvvars
];
Tell NixOS to automatically run garbage collection (this removes unused packages from the system periodically). environment.systemPackages
is where we tell Nix which programs we would like installed and available on the system path. We’ll tell it that we would like vim & git installed, as well as all of the custom scripts that we’ll define later. Vim is not necessary, but is useful for editing/viewing files.
This sets it so that our users can only be configured through our configuration file.
Input the public rsa keys that can be used to log in as root to openssh. Make sure to set this to your key or you won’t be able to log in.
users.users.git = {
isNormalUser = true;
description = "git user";
createHome = true;
home = "${repos-dir}";
shell = "${pkgs.git}/bin/git-shell";
openssh.authorizedKeys.keys = ["your public rsa key goes here"];
};
This defines a user called git
for the server. This user will be used for all git functionality. It is a normal user & its home directory is where we will store all of the git repositories. ${repos-dir}
will evaluate to the repos-dir variable declared earlier. We make sure to set the git user’s login shell to git-shell
so that it can’t run any non-git commands. ${pkgs.git}
will evaluate to the directory that git is installed into in NixOS. This ensures that the correct path to git-shell is used. openssh.authorizedKeys.keys
contains the public rsa keys that can be used to run git commands on the server. Put whichever keys in here you would like to grant access to. If you don’t put any keys here no one will be able to use the git server.
This is actually pretty simple.
{pkgs, ...}: let
targetEnv = "virtualbox";
virtualbox = {
memorySize = 1024;
vcpu = 1;
headless = true;
};
in {
deployment = {
targetEnv = targetEnv;
virtualbox = virtualbox;
};
}
Feel free to change the options (i.e. memory, cpus used, or whether or not it’s headless) as you wish. deployment
refers to the environment used to deploy the server.
This config file defines Nix derivations for our shell scripts used to help manage our git server. These derivations essentially tell Nix how to build our scripts. In our case building will be fairly simple as we only need to copy the scripts over to the server and ensure that they are on the system path.
{ pkgs ? import <nixpkgs> {}, repos-dir ? "/home/git" }:
let
stdenv = pkgs.stdenv;
sh = pkgs.sh;
coreutils = pkgs.coreutils;
in {
repos-backup = stdenv.mkDerivation rec {
name = "repos-backup";
builder = "${sh}/bin/sh";
args = [ ./shell-script-builder.sh ];
src = ./repos-backup.sh;
buildInputs = [ coreutils ];
system = builtins.currentSystem;
};
repos-create = stdenv.mkDerivation rec {
name = "repos-create";
builder = "${sh}/bin/sh";
args = [ ./shell-script-builder.sh ];
src = ./repos-create.sh;
buildInputs = [ coreutils ];
system = builtins.currentSystem;
};
repos-delete = stdenv.mkDerivation rec {
name = "repos-delete";
builder = "${sh}/bin/sh";
args = [ ./shell-script-builder.sh ];
src = ./repos-delete.sh;
buildInputs = [ coreutils ];
system = builtins.currentSystem;
};
repos-list = stdenv.mkDerivation rec {
name = "repos-list";
builder = "${sh}/bin/sh";
args = [ ./shell-string-script-builder.sh ];
src = ''
#!/bin/sh
. repos-setenvvars
ls -d $reposDir/*.git | xargs -n1 basename
'';
buildInputs = [ coreutils ];
system = builtins.currentSystem;
};
repos-setenvvars = stdenv.mkDerivation rec {
name = "repos-setenvvars";
builder = "${sh}/bin/sh";
args = [ ./shell-string-script-builder.sh ];
src = ''
#!/bin/sh
# set environment variables for use in repos scripts
reposDir="${repos-dir}" #directory containing the git repos
reposBackupDir=$reposDir/repobackups #directory containing the git repos backups
'';
buildInputs = [ coreutils ];
system = builtins.currentSystem;
};
}
Lets go through this line by line.
Input two optional variables into our Nix function. These determine what package repository to use (defaulting to nixpkgs) and what the repos-dir should be set to (defaulting to “/home/git”).
Define variables in a let block for use in the upcoming code block. These are simply used to define a shorthand way of referring to a few packages.
Start the code block.
repos-backup = stdenv.mkDerivation rec {
name = "repos-backup";
builder = "${sh}/bin/sh";
args = [ ./shell-script-builder.sh ];
src = ./repos-backup.sh;
buildInputs = [ coreutils ];
system = builtins.currentSystem;
};
repos-create = stdenv.mkDerivation rec {
name = "repos-create";
builder = "${sh}/bin/sh";
args = [ ./shell-script-builder.sh ];
src = ./repos-create.sh;
buildInputs = [ coreutils ];
system = builtins.currentSystem;
};
repos-delete = stdenv.mkDerivation rec {
name = "repos-delete";
builder = "${sh}/bin/sh";
args = [ ./shell-script-builder.sh ];
src = ./repos-delete.sh;
buildInputs = [ coreutils ];
system = builtins.currentSystem;
};
Define Nix derivations for our backup, create, and delete scripts for handling git repositories. These are all fairly similar except they have different names & scripts. name
is the name of the derivation. This will also end up being the name of the command that can be run in the terminal to run the script. builder
is what will be used to run the builder script. In our case it is sh. args
contains the script that will be run by sh in order to build the Nix derivation. We’ll define shell-script-builder.sh
in a moment. src
is the source to be built. The shell scripts in all of these scripts will be defined later in the post. buildInputs
contains the packages that are needed in order to build this derivation. coreutils
contains basic commands like cp
that we need.
Let’s go over the shell-script-builder.sh
script so that we can understand how it is doing the building.
#!/bin/sh
set -e
unset PATH
for p in $buildInputs; do
export PATH=$p/bin${PATH:+:}$PATH
done
mkdir -p $out/bin
cp $src $out/bin/$name
set -e
makes it so that any subsequent commands that fail will make the script fail. Next we build a PATH so that the correct binaries are on it. First clear the path with unset. The for loop adds the bin directory for each pkg in buildInputs to our path. This way the correct binaries are available during our build stage.
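The ${PATH:+:} expansion in that loop is worth a note: it expands to a colon only when the variable is already non-empty, which avoids a stray leading or trailing separator. A minimal standalone sketch of the same idiom (the directory names here are just illustrative):

```shell
#!/bin/sh
# Rebuild a PATH-style variable from scratch, as the builder script does.
# ${NEWPATH:+:} expands to ":" only when NEWPATH is already non-empty,
# so the first entry is added without a dangling colon.
NEWPATH=""
for p in /usr/local /usr /opt; do
    NEWPATH=$p/bin${NEWPATH:+:}$NEWPATH
done
echo "$NEWPATH"   # /opt/bin:/usr/bin:/usr/local/bin
```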
The out
variable contains the output directory path for our derivation. This variable is set by Nix. We use mkdir to create a bin directory in the output directory for our script.
Note:
Executables/scripts must be put in the bin directory of the derivation or they won’t be put on the system path (this took me quite a while to figure out so I’m pointing it out to you).
Finally, copy the src code file to the bin directory of the derivation, under the name defined in the derivation, with the cp
command.
OK, with that out of the way, let’s go back to the rest of the repos-packages.nix file.
repos-list = stdenv.mkDerivation rec {
name = "repos-list";
builder = "${sh}/bin/sh";
args = [ ./shell-string-script-builder.sh ];
src = ''
#!/bin/sh
. repos-setenvvars
ls -d $reposDir/*.git | xargs -n1 basename
'';
buildInputs = [ coreutils ];
system = builtins.currentSystem;
};
repos-setenvvars = stdenv.mkDerivation rec {
name = "repos-setenvvars";
builder = "${sh}/bin/sh";
args = [ ./shell-string-script-builder.sh ];
src = ''
#!/bin/sh
# set environment variables for use in repos scripts
reposDir="${repos-dir}" #directory containing the git repos
reposBackupDir=$reposDir/repobackups #directory containing the git repos backups
'';
buildInputs = [ coreutils ];
system = builtins.currentSystem;
};
This defines the Nix derivations for our setenvvars & list scripts. The repos-list
script simply lists out the current git repositories using the ls
command. The repos-setenvvars
script defines two variables, containing the location of the git repositories as well as the location of the git repository backups, to be used by the other scripts. There are two key differences between these derivations and our other three derivations.
First, defining the repos-setenvvars
script in our configuration file allows the location of reposDir to be defined in the Nix configuration. This keeps us from having to keep multiple files in multiple locations synced. The other scripts can then call the repos-setenvvars
script to keep the locations consistent.
Second, shell-script-builder.sh
has been replaced with shell-string-script-builder.sh
, which is used to build these derivations. It is similar to shell-script-builder.sh with some small differences. shell-string-script-builder.sh
is as follows:
#!/bin/sh
set -e
unset PATH
for p in $buildInputs; do
export PATH=$p/bin${PATH:+:}$PATH
done
mkdir -p $out/bin
echo "$src" > $out/bin/$name
chmod a+xr $out/bin/$name
This is very similar to shell-script-builder.sh
, but here we echo the contents of the src variable (which contains the text of the shell script and is set by Nix) to a file. We must then call chmod on the created file to give it read and execute permissions (else you won’t be able to run the script).
A test-build for repos-packages.nix
can be done using nix-build with the following command:
nix-build repos-packages.nix
Let’s go ahead and define the repos-backup.sh
, repos-create.sh
, and repos-delete.sh
scripts. As their names suggest, these will back up your repositories, create a new repository, & delete a repository. On the git server these scripts will be run as: repos-backup
, repos-create
, & repos-delete
.
The repos-backup.sh
script is as follows:
#!/bin/sh
#backs up git repos in your repo directory
. repos-setenvvars #set environment variables
if [ -z "$reposDir" ] || [ ! -d "$reposDir" ]; then
echo "exiting, can't find reposDir or env variables not set"
exit 1
fi
mkdir -p $reposBackupDir
for repo in $(ls -d $reposDir/*.git)
do
repobase=$(basename $repo)
checkSumFile=$reposBackupDir/sha-$repobase.sha256
checkSumFileContents=$(cat $checkSumFile 2>/dev/null) #empty on the first run
checkSum=$(find $repo -type f -exec sha256sum {} \; | sha256sum)
#check the old checksum vs the new one to see if the repo has changed
if [ "$checkSum" = "$checkSumFileContents" ]
then
echo "repo: '$repo' has not changed..."
#don't need to do anything as the repo hasn't been updated
else
#tar the repo and then back it up to the cloud... remove the tar file when we're done
echo "repo: '$repo' has changed... backing up..."
tarFile=$reposBackupDir/$repobase.tar.gz
tar -zcvf $tarFile $repo
#TODO: transfer the tar file to cloud storage here
rm -f $tarFile
echo "finished backing up repo: '$repo'"
fi
#update the checksums
rm -f $checkSumFile
echo "$checkSum" > $checkSumFile
done
The script first loads the repos-setenvvars script in order to set the environment variables. It exits if either the reposDir variable is not set or the location referenced by reposDir does not exist on the file system. Next it makes the repos backup directory, if it does not exist, via mkdir.
Loop through each git repo in reposDir. Each git repo name ends with the suffix .git.
repobase=$(basename $repo)
checkSumFile=$reposBackupDir/sha-$repobase.sha256
checkSumFileContents=$(cat $checkSumFile 2>/dev/null) #empty on the first run
checkSum=$(find $repo -type f -exec sha256sum {} \; | sha256sum)
For each repo we’ll compute a checksum based upon the repo’s contents and store it in a file. If the currently computed checksum is different from the stored checksum, it means the repo has changed and we’ll need to back it up. The checkSum variable holds the current checksum of the repo (using a sha256 hash), & checkSumFileContents contains the old checksum of the repo.
#check the old checksum vs the new one to see if the repo has changed
if [ "$checkSum" = "$checkSumFileContents" ]
then
echo "repo: '$repo' has not changed..."
#don't need to do anything as the repo hasn't been updated
else
#tar the repo and then back it up to the cloud... remove the tar file when we're done
echo "repo: '$repo' has changed... backing up..."
tarFile=$reposBackupDir/$repobase.tar.gz
tar -zcvf $tarFile $repo
#TODO: transfer the tar file to cloud storage here
rm -f $tarFile
echo "finished backing up repo: '$repo'"
fi
Check if the checksums match. If they do we don’t have to do anything. If they don’t then we’ll use tar to create a compressed tar archive and then back it up as a change has occurred.
Note:
You’ll need to write code to transfer the tar archive to your storage and place it where the todo note is. Otherwise it won’t actually back up anything.
After transferring the tar file we remove it so that we’re not taking up unnecessary space on the server.
Finally, make sure to update the checkSumFile with a hash of the repository’s current contents.
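The checksum-based change detection can be exercised on its own. This sketch substitutes a temporary directory for the real repo and backup paths; the names here are stand-ins, not the script’s actual variables:

```shell
#!/bin/sh
set -e
# tmp stands in for the real $reposDir/$reposBackupDir locations.
tmp=$(mktemp -d)
mkdir "$tmp/demo.git"
echo "v1" > "$tmp/demo.git/object"
sumFile=$tmp/sha-demo.git.sha256

# Same hashing scheme as repos-backup: hash every file, then hash the list.
repo_checksum() {
    find "$1" -type f -exec sha256sum {} \; | sha256sum
}

# First pass: no stored checksum yet, so the repo counts as changed.
old=$(cat "$sumFile" 2>/dev/null || true)
new=$(repo_checksum "$tmp/demo.git")
if [ "$new" = "$old" ]; then first=unchanged; else first=changed; fi
echo "$new" > "$sumFile"

# Second pass with no edits: stored and current checksums now match.
old=$(cat "$sumFile")
new=$(repo_checksum "$tmp/demo.git")
if [ "$new" = "$old" ]; then second=unchanged; else second=unchanged; fi
rm -rf "$tmp"
```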
The repos-create.sh
script is as follows:
#!/bin/sh
#creates a git repo in your repo directory
#input a cli argument with the git repo to create
set -e
. repos-setenvvars #set environment variables
if [ -z "$reposDir" ] || [ ! -d "$reposDir" ]; then
echo "exiting, can't find reposDir or env variables not set"
exit 1
fi
if [ -z "$1" ]; then
echo "exiting, no repo name input to create"
echo "usage: 'repos-create <name-of-repo-to-create>'"
exit 1
fi
case "$1" in
*.git)
newRepoDir=$reposDir/$1 ;;
*)
newRepoDir=$reposDir/$1.git ;;
esac
if [ -d "$newRepoDir" ]; then
echo "repo: '$newRepoDir' already exists, exiting..."
exit 1
else
echo "creating new repo: '$newRepoDir'"
mkdir -p $newRepoDir
cd $newRepoDir
git init --bare
echo "new repo: '$newRepoDir' created"
fi
This script starts off the same way as the last one, by setting the environment variables appropriately.
if [ -z "$1" ]; then
echo "exiting, no repo name input to create"
echo "usage: 'repos-create <name-of-repo-to-create>'"
exit 1
fi
If there is no repo name input into the script as a command line argument then quit & print the usage. $1 refers to the first command line argument.
Set up a variable containing the path of the new git repo. This will also ensure that the repo ends in .git
by adding it to the end if necessary. This allows the script to accept an argument that may or may not end in .git and standardize it to end in .git.
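The suffix handling can be sketched as a small function (normalize_repo_name is an illustrative name, not part of the real script):

```shell
#!/bin/sh
# Append .git to a repo name unless it already ends in .git,
# mirroring the case statement in repos-create.sh.
normalize_repo_name() {
    case "$1" in
        *.git) echo "$1" ;;
        *)     echo "$1.git" ;;
    esac
}

a=$(normalize_repo_name myproject)      # myproject.git
b=$(normalize_repo_name myproject.git)  # myproject.git
```

Either spelling of the name ends up in the same standardized form, so users don’t have to remember whether to type the suffix.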
if [ -d "$newRepoDir" ]; then
echo "repo: '$newRepoDir' already exists, exiting..."
exit 1
else
echo "creating new repo: '$newRepoDir'"
mkdir -p $newRepoDir
cd $newRepoDir
git init --bare
echo "new repo: '$newRepoDir' created"
fi
Create a new git repo if it does not exist. All that needs to be done is creating a directory to hold the repo and then calling git init --bare
inside the newly created directory. This is the git command to initialize an empty repository.
The repos-delete.sh
script is as follows:
#!/bin/sh
#deletes a git repo in your repo directory
#input a cli argument with the git repo to delete
set -e
. repos-setenvvars #set environment variables
if [ -z "$reposDir" ] || [ ! -d "$reposDir" ]; then
echo "exiting, can't find reposDir or env variables not set"
exit 1
elif [ -z "$1" ]; then
echo "exiting, no repo name input to delete"
echo "usage: 'repos-delete <name-of-repo-to-delete>'"
exit 1
fi
case "$1" in
*.git)
deleteRepoDir=$reposDir/$1
deleteRepoBackupHash=$reposBackupDir/sha-$1.sha256 ;;
*)
deleteRepoDir=$reposDir/$1.git
deleteRepoBackupHash=$reposBackupDir/sha-$1.git.sha256 ;;
esac
if [ ! -d "$deleteRepoDir" ]; then
echo "exiting, git repo at: '$deleteRepoDir' does not exist"
exit 1
fi
#ask for confirmation that the user actually wants to delete the repo
read -p "Confirmation, would you like to delete the git repo at: '$deleteRepoDir' (y/n)? " choice
case "$choice" in
y|Y ) echo "Yes selected, deleting: '$deleteRepoDir'" ;;
n|N ) echo "No selected, exiting" && exit 0 ;;
* ) echo "invalid option, exiting" && exit 1 ;;
esac
rm -rf $deleteRepoDir
if [ -f "$deleteRepoBackupHash" ]; then
rm $deleteRepoBackupHash
fi
echo "git repo '$deleteRepoDir' has been deleted"
This script again starts off by sourcing the environment variables. It then checks to make sure that a repo name was input into the script as a command line argument (else it exits and prints usage).
case "$1" in
*.git)
deleteRepoDir=$reposDir/$1
deleteRepoBackupHash=$reposBackupDir/sha-$1.sha256 ;;
*)
deleteRepoDir=$reposDir/$1.git
deleteRepoBackupHash=$reposBackupDir/sha-$1.git.sha256 ;;
esac
This sets the variables containing the git repo directory to delete (as well as the backup hash to delete). Similarly to the create script this allows us to accept command line arguments that may or may not end in .git.
if [ ! -d "$deleteRepoDir" ]; then
echo "exiting, git repo at: '$deleteRepoDir' does not exist"
exit 1
fi
Check that the repo to delete actually exists. If it doesn’t exist then exit the script.
Now that all of the files needed for the server have been defined let’s go over how to create a deployment with NixOps. To create a deployment for our git server use the create command:
nixops create git-server.nix -d git-server
This creates a deployment using our git-server.nix configuration named git-server
. To build & deploy the configuration you’ll need to use the deploy command:
nixops deploy -d git-server
The first time running the deploy command will provision a virtual machine using VirtualBox & then build/deploy the nix configuration to the virtual machine. After deploying you can use the info command to get info about the current deployments:
nixops info
Take note of the IP Address listed here as you’ll need it in order to connect to the server. In the future if you need to make any modifications to the configuration you’ll first need to edit the git-server.nix file and then run the modify command:
nixops modify git-server.nix -d git-server
Running the modify command will only modify the configuration; it will not deploy it. In order to deploy the newly modified configuration, you’ll need to run the deploy command after the modify command.
Now that the server is running you should be able to ssh in as the root user and use the repos-create
command to create a new git repository. After the repository is created you should be able to clone it on your local computer by running:
git clone git@<IP_ADDRESS_OF_SERVER>:<NAME_OF_GIT_REPO>
You should then be able to run local git commands & push changes to the server.
In the future if you want to delete the git server you can use the nixops destroy -d git-server
& nixops delete -d git-server
commands.
Let’s start off by installing Visual Studio Code. Head to https://code.visualstudio.com/ and download the correct version for your operating system and install it.
Next let’s go ahead and go to https://www.haskell.org/platform/ and download the latest version of the Haskell Platform for your operating system and install it (in some cases you can simply install it with your OS’s package manager). The Haskell Platform includes GHC, Cabal, and Stack (all of which we need).
Load up a terminal window (or Command Prompt in Windows) and try the following commands (each outputs its version) to make sure they all installed successfully.
Test for GHC’s version: ghc --version
Test for Cabal’s version: cabal --version
Test for Stack’s version: stack --version
If any of them did not install (ie. the command is unrecognized) then you’ll need to go to their respective page (GHC, Cabal, Stack) and follow the instructions.
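If you like, all three checks can be combined into one small shell loop (check_tools is just an illustrative helper, not part of the Haskell Platform):

```shell
#!/bin/sh
# check_tools prints each named tool that is NOT found on the PATH.
check_tools() {
    for tool in "$@"; do
        command -v "$tool" > /dev/null 2>&1 || echo "$tool"
    done
}

# For the Haskell setup you would run: check_tools ghc cabal stack
# Empty output means everything is installed.
```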
Load up Visual Studio Code. Click on the Extensions Icon of the far left (it looks like a square and is at the bottom of the icons list). Search for the Haskero extension. Install Haskero by clicking on the install button. Exit out of Visual Studio Code after installing Haskero.
Open up a terminal window and navigate to the directory you wish to create your haskell project in. Type the following command:
stack new MyFirstHaskellProject new-template
This creates a stack project named MyFirstHaskellProject
using the new-template
template. This creates a directory named MyFirstHaskellProject
in the working directory which contains the project. MyFirstHaskellProject
can be changed to whatever name is desired. Templates other than the new-template
template can be used when starting a new project. The templates available can be listed out by typing stack templates
in the terminal.
NOTE: you may need to upgrade your stack version & try again if the following steps fail.
Now that we’ve made a project go ahead and change directories to the MyFirstHaskellProject
directory created by stack in the terminal. First run stack setup
to set up stack. Next run stack build
to build the project. Now run stack build intero
to setup Intero for the project. This will allow Haskero to work correctly. Finally run code .
to load up the project in Visual Studio Code. Haskero should now be working correctly. If Haskero is having issues finding stack then you’ll need to make sure that stack is on the system path (or mess with the Haskero configuration so that it has the correct location to run stack from). You can use the command stack build --exec MyFirstHaskellProject
to compile and then run the built executable.
In the project you should see a few files & directories have been created for you. MyFirstHaskellProject.cabal
& stack.yaml
are filled with settings used when building the project. app/Main.hs
is the Haskell file containing the entry point of the application. The src
& test
directories are to contain source code and test code respectively.
Here’s a list of what you need to do in the future when starting a new project.
stack new MyFirstHaskellProject new-template
to initialize a new project.
stack build
to do an initial build of the project.
stack build intero
to add Intero to the project.
code .
to start Visual Studio Code with the project loaded up.
Before we start, here’s an image depicting the graph we will test Dijkstra’s Algorithm on:
Declare the constructor:
Declare the seven nodes to be held in the graph. Each node is named n followed by the number it contains, to allow us to easily remember which node is which.
Node<Integer> n0 = new Node<Integer>(0);
Node<Integer> n1 = new Node<Integer>(1);
Node<Integer> n2 = new Node<Integer>(2);
Node<Integer> n3 = new Node<Integer>(3);
Node<Integer> n4 = new Node<Integer>(4);
Node<Integer> n5 = new Node<Integer>(5);
Node<Integer> n6 = new Node<Integer>(6);
Put each node into our graph:
graph.put(n0.getIndex(), n0);
graph.put(n1.getIndex(), n1);
graph.put(n2.getIndex(), n2);
graph.put(n3.getIndex(), n3);
graph.put(n4.getIndex(), n4);
graph.put(n5.getIndex(), n5);
graph.put(n6.getIndex(), n6);
Add the edges between each of the nodes in our graph:
n0.addEdge(n1, 5);
n0.addEdge(n2, 4);
n1.addEdge(n3, 2);
n1.addEdge(n2, 1);
n2.addEdge(n3, 4);
n3.addEdge(n4, 10);
n4.addEdge(n5, 6);
n5.addEdge(n6, 3);
Finally print out the shortest path between node 0 and each of the rest of the nodes:
System.out.println("-------------------------------------");
System.out.println("Dijkstra's algorithm test:");
System.out.println("n0 -> n1: " + getShortestPath(n0, n1));
System.out.println("n0 -> n2: " + getShortestPath(n0, n2));
System.out.println("n0 -> n3: " + getShortestPath(n0, n3));
System.out.println("n0 -> n4: " + getShortestPath(n0, n4));
System.out.println("n0 -> n5: " + getShortestPath(n0, n5));
System.out.println("n0 -> n6: " + getShortestPath(n0, n6));
The finished constructor is as follows:
public DijkstrasAlgoExample() {
Node<Integer> n0 = new Node<Integer>(0);
Node<Integer> n1 = new Node<Integer>(1);
Node<Integer> n2 = new Node<Integer>(2);
Node<Integer> n3 = new Node<Integer>(3);
Node<Integer> n4 = new Node<Integer>(4);
Node<Integer> n5 = new Node<Integer>(5);
Node<Integer> n6 = new Node<Integer>(6);
graph.put(n0.getIndex(), n0);
graph.put(n1.getIndex(), n1);
graph.put(n2.getIndex(), n2);
graph.put(n3.getIndex(), n3);
graph.put(n4.getIndex(), n4);
graph.put(n5.getIndex(), n5);
graph.put(n6.getIndex(), n6);
n0.addEdge(n1, 5);
n0.addEdge(n2, 4);
n1.addEdge(n3, 2);
n1.addEdge(n2, 1);
n2.addEdge(n3, 4);
n3.addEdge(n4, 10);
n4.addEdge(n5, 6);
n5.addEdge(n6, 3);
System.out.println("-------------------------------------");
System.out.println("Dijkstra's algorithm test:");
System.out.println("n0 -> n1: " + getShortestPath(n0, n1));
System.out.println("n0 -> n2: " + getShortestPath(n0, n2));
System.out.println("n0 -> n3: " + getShortestPath(n0, n3));
System.out.println("n0 -> n4: " + getShortestPath(n0, n4));
System.out.println("n0 -> n5: " + getShortestPath(n0, n5));
System.out.println("n0 -> n6: " + getShortestPath(n0, n6));
}
The output of running the code is:
-------------------------------------
Dijkstra's algorithm test:
n0 -> n1: 5
n0 -> n2: 4
n0 -> n3: 7
n0 -> n4: 17
n0 -> n5: 23
n0 -> n6: 26
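As a sanity check on these numbers, the same graph can be solved with a compact standalone sketch. This is not the tutorial code: it swaps javafx.util.Pair for plain int[] pairs and uses adjacency lists, and the class name DijkstraCheck is illustrative. Printed as an array indexed by node, the distances should match the output above (with distance 0 for n0 itself).

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class DijkstraCheck {
    public static void main(String[] args) {
        // Same undirected graph as above, as {u, v, weight} triples
        int[][] edges = {{0, 1, 5}, {0, 2, 4}, {1, 3, 2}, {1, 2, 1},
                         {2, 3, 4}, {3, 4, 10}, {4, 5, 6}, {5, 6, 3}};
        int n = 7;
        List<List<int[]>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) {
            adj.get(e[0]).add(new int[]{e[1], e[2]}); // undirected: add both directions
            adj.get(e[1]).add(new int[]{e[0], e[2]});
        }
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[0] = 0;
        // Priority queue of {node, distance} pairs ordered by distance
        PriorityQueue<int[]> queue = new PriorityQueue<>(Comparator.comparingInt(a -> a[1]));
        queue.add(new int[]{0, 0});
        while (!queue.isEmpty()) {
            int[] pair = queue.remove();
            if (pair[1] > dist[pair[0]]) continue; // stale queue entry, skip it
            for (int[] edge : adj.get(pair[0])) {
                int newDist = pair[1] + edge[1];
                if (newDist < dist[edge[0]]) {
                    dist[edge[0]] = newDist;
                    queue.add(new int[]{edge[0], newDist});
                }
            }
        }
        System.out.println(Arrays.toString(dist));
    }
}
```

The stale-entry check (pair[1] > dist[pair[0]]) plays the same role as the visited set in the tutorial version: it skips queue entries that no longer represent the best known distance.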
The finished DijkstrasAlgoExample class is as follows:
package com.spartanengineer.datastructures;
import java.util.Map;
import java.util.PriorityQueue;
import java.util.Set;
import javafx.util.Pair;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
public class DijkstrasAlgoExample {
private static int nodeIndex = 0;
// undirected graph example
private class Node<U> {
private U data;
private Map<Node<U>, Integer> edges;
private int index = -1;
public Node(U data) {
this.data = data;
edges = new HashMap<Node<U>, Integer>();
this.index = nodeIndex;
nodeIndex += 1;
}
public U getData() {
return data;
}
public void setData(U data) {
this.data = data;
}
public Map<Node<U>, Integer> getEdges() {
return edges;
}
public void setEdges(Map<Node<U>, Integer> edges) {
this.edges = edges;
}
public int getIndex() {
return index;
}
public void setIndex(int index) {
this.index = index;
}
/**
* Add an undirected edge, will replace an already existing edge between the two nodes
*/
public void addEdge(Node<U> node, int weight) {
edges.put(node, weight);
if(!node.getEdges().containsKey(this)) {
node.addEdge(this, weight);
} else {
if(node.getEdges().get(this) != weight) {
node.addEdge(this, weight);
}
}
}
}
// Used to allow our priority queue to order edge pairs correctly
private class EdgePairComparator implements Comparator<Pair<Node<Integer>, Integer>> {
@Override
public int compare(Pair<Node<Integer>, Integer> o1, Pair<Node<Integer>, Integer> o2) {
return o1.getValue().compareTo(o2.getValue());
}
}
private Map<Integer, Node<Integer>> graph = new HashMap<>();
public DijkstrasAlgoExample() {
Node<Integer> n0 = new Node<Integer>(0);
Node<Integer> n1 = new Node<Integer>(1);
Node<Integer> n2 = new Node<Integer>(2);
Node<Integer> n3 = new Node<Integer>(3);
Node<Integer> n4 = new Node<Integer>(4);
Node<Integer> n5 = new Node<Integer>(5);
Node<Integer> n6 = new Node<Integer>(6);
graph.put(n0.getIndex(), n0);
graph.put(n1.getIndex(), n1);
graph.put(n2.getIndex(), n2);
graph.put(n3.getIndex(), n3);
graph.put(n4.getIndex(), n4);
graph.put(n5.getIndex(), n5);
graph.put(n6.getIndex(), n6);
n0.addEdge(n1, 5);
n0.addEdge(n2, 4);
n1.addEdge(n3, 2);
n1.addEdge(n2, 1);
n2.addEdge(n3, 4);
n3.addEdge(n4, 10);
n4.addEdge(n5, 6);
n5.addEdge(n6, 3);
System.out.println("-------------------------------------");
System.out.println("Dijkstra's algorithm test:");
System.out.println("n0 -> n1: " + getShortestPath(n0, n1));
System.out.println("n0 -> n2: " + getShortestPath(n0, n2));
System.out.println("n0 -> n3: " + getShortestPath(n0, n3));
System.out.println("n0 -> n4: " + getShortestPath(n0, n4));
System.out.println("n0 -> n5: " + getShortestPath(n0, n5));
System.out.println("n0 -> n6: " + getShortestPath(n0, n6));
}
public int getShortestPath(Node<Integer> start, Node<Integer> end) {
//keeps track of the distance between this node and every other node
Map<Integer, Integer> distances = new HashMap<>();
for(Node<Integer> n : graph.values())
distances.put(n.getIndex(), Integer.MAX_VALUE);
//keeps track of which nodes we have visited
Set<Integer> visited = new HashSet<Integer>();
//declare a priority queue which will be used to help find the shortest path to each node
PriorityQueue<Pair<Node<Integer>, Integer>> queue = new PriorityQueue<>(new EdgePairComparator());
//initially load the priority queue with the start node (as that is where we start!!!)
Pair<Node<Integer>, Integer> startPair = new Pair<>(start, 0);
queue.add(startPair);
//when the queue is empty we will have found the shortest distance from the start node to all other nodes
while(!queue.isEmpty()) {
//the pair at the front of the queue will be the current node to visit
Pair<Node<Integer>, Integer> pair = queue.remove();
Node<Integer> node = pair.getKey();
Integer weight = pair.getValue();
int nodeIndex = node.getIndex();
if(weight < distances.get(nodeIndex)) {
//if a shorter path has been found then update the distance
distances.put(nodeIndex, weight);
}
//visit all the adjacent nodes to the node currently being visited
if(!visited.contains(nodeIndex)) {
visited.add(nodeIndex); //mark off this node so that we don't have to visit it again
Map<Node<Integer>, Integer> edges = node.getEdges();
for(Node<Integer> edgeNode : edges.keySet()) {
int edgeWeight = edges.get(edgeNode);
Pair<Node<Integer>, Integer> edgePair = new Pair<>(edgeNode, weight+edgeWeight);
queue.add(edgePair);
}
}
}
return distances.get(end.getIndex());
}
}
Next we need to define a class that implements a comparator for pairs of nodes and integers, so that we can use a priority queue of such pairs in our shortest path implementation. The pairs must be ordered from smallest integer to largest; the integer in each pair is the distance from the start node. The implementation is as follows:
private class EdgePairComparator implements Comparator<Pair<Node<Integer>, Integer>> {
@Override
public int compare(Pair<Node<Integer>, Integer> o1, Pair<Node<Integer>, Integer> o2) {
return o1.getValue().compareTo(o2.getValue());
}
}
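As a side note, on Java 8 and later the same ordering can be written without a named comparator class by using Comparator.comparing. The sketch below uses java.util.AbstractMap.SimpleEntry as a stand-in for javafx.util.Pair (which is no longer bundled with the JDK since Java 11); ComparatorDemo is an illustrative name, not tutorial code.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Comparator;
import java.util.PriorityQueue;

public class ComparatorDemo {
    public static void main(String[] args) {
        // Order (node, distance) pairs by their distance value, smallest first
        Comparator<SimpleEntry<Integer, Integer>> byDistance =
                Comparator.comparing(SimpleEntry::getValue);
        PriorityQueue<SimpleEntry<Integer, Integer>> queue = new PriorityQueue<>(byDistance);
        queue.add(new SimpleEntry<>(3, 7));
        queue.add(new SimpleEntry<>(1, 5));
        queue.add(new SimpleEntry<>(2, 6));
        // The head of the queue is the pair with the smallest distance
        System.out.println(queue.remove().getKey());
    }
}
```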
Now we can begin to write our shortest path function. It takes the starting node and the ending node as input and returns an integer containing the length of the shortest path between the two nodes. The function definition is as follows:
First we’ll declare a map from integer to integer called distances, which will contain the distance to each node. The key of this map is the index of the node and the value is the distance to that node from the start node. We’ll initialize the distance of each node to Integer.MAX_VALUE, which acts as a stand-in for infinity (Java integers have no infinity value).
Map<Integer, Integer> distances = new HashMap<>();
for(Node<Integer> n : graph.values())
distances.put(n.getIndex(), Integer.MAX_VALUE);
Declare a set of integers called visited. This set keeps track of which nodes have been visited. It starts out empty; as each node is visited it is added to the set.
Declare a priority queue containing pairs of nodes and integers. It needs to be declared with the EdgePairComparator we defined earlier. This priority queue will contain the nodes and the distance of that node from the start node. The item at the front of the queue will by definition be the node with the shortest distance. This property is vital to Dijkstra’s algorithm.
Initially load the queue with the start node by declaring a new Pair containing the start node as well as the distance from the start node (0 as it is the start node). Add this startPair to the queue that was just declared.
Next we’ll iterate through the queue, removing pairs from it one at a time and visiting them (updating our distances map in the process). As each node is visited, its neighbours are added to the queue so that they can be visited in turn (if they have not already been visited). When the queue is finally empty the algorithm has finished. Start off by declaring a while loop that runs as long as the queue is not empty:
Remove the first pair from the queue. Get the weight, node, and nodeIndex from the pair. The weight represents the distance that node is from the start node.
Pair<Node<Integer>, Integer> pair = queue.remove();
Node<Integer> node = pair.getKey();
Integer weight = pair.getValue();
int nodeIndex = node.getIndex();
If the weight is less than the distance stored in our distances map, update the distance to the new value.
Next we need to add every edge of the currently visited node to the priority queue, each with a distance equal to the current weight plus the weight of that edge. We only do this if the node has not already been visited, so first check the visited set:
Add the node index to the visited set to mark the node off, so that it isn’t visited again.
Get the edges for the currently visited node:
Loop through all of the nodes in the edges map:
Finally, get the weight of the edge, create a new edgePair containing the edgeNode and its distance from the start (ie the weight + the edgeWeight), and add the pair to the queue so that the node can be visited.
int edgeWeight = edges.get(edgeNode);
Pair<Node<Integer>, Integer> edgePair = new Pair<>(edgeNode, weight+edgeWeight);
queue.add(edgePair);
Finally, after the while loop has finished, return the distance between the start node and the end node. This distance is held in the distances map that the while loop filled in. If the end node is unreachable the distance will be equal to Integer.MAX_VALUE.
The finished getShortestPath() function is as follows:
public int getShortestPath(Node<Integer> start, Node<Integer> end) {
//keeps track of the distance between this node and every other node
Map<Integer, Integer> distances = new HashMap<>();
for(Node<Integer> n : graph.values())
distances.put(n.getIndex(), Integer.MAX_VALUE);
//keeps track of which nodes we have visited
Set<Integer> visited = new HashSet<Integer>();
//declare a priority queue which will be used to help find the shortest path to each node
PriorityQueue<Pair<Node<Integer>, Integer>> queue = new PriorityQueue<>(new EdgePairComparator());
//initially load the priority queue with the start node (as that is where we start!!!)
Pair<Node<Integer>, Integer> startPair = new Pair<>(start, 0);
queue.add(startPair);
//when the queue is empty we will have found the shortest distance from the start node to all other nodes
while(!queue.isEmpty()) {
//the pair at the front of the queue will be the current node to visit
Pair<Node<Integer>, Integer> pair = queue.remove();
Node<Integer> node = pair.getKey();
Integer weight = pair.getValue();
int nodeIndex = node.getIndex();
if(weight < distances.get(nodeIndex)) {
//if a shorter path has been found then update the distance
distances.put(nodeIndex, weight);
}
//visit all the adjacent nodes to the node currently being visited
if(!visited.contains(nodeIndex)) {
visited.add(nodeIndex); //mark off this node so that we don't have to visit it again
Map<Node<Integer>, Integer> edges = node.getEdges();
for(Node<Integer> edgeNode : edges.keySet()) {
int edgeWeight = edges.get(edgeNode);
Pair<Node<Integer>, Integer> edgePair = new Pair<>(edgeNode, weight+edgeWeight);
queue.add(edgePair);
}
}
}
return distances.get(end.getIndex());
}
That concludes the writing of the getShortestPath() function. In the following post we will wrap up the implementation of Dijkstra’s Algorithm: Implementing Dijkstra’s Algorithm (Shortest Path) in Java - Part Three.
The priority queue will contain pairs of Nodes and Integers. Each pair represents a node and its distance from the start node. To begin with, the starting node and its distance from the start (0) are added to the priority queue. While the priority queue is not empty we do the following: remove the front pair, record its distance if it is shorter than what we have so far, and, if the node hasn’t been visited yet, push each of its neighbours onto the queue.
The use of the priority queue is vital to Dijkstra’s algorithm. It ensures that the node being visited is always the closest unvisited node to the start node.
Let’s start implementing Dijkstra’s Algorithm with a class definition:
We need a class level variable (an integer, to be precise) to keep track of the current nodeIndex in our graph, which lets us assign each Node its own unique ID. The integer is declared private to prevent tampering, and static so that the counter is shared by every node. This unique ID setup will work for our purposes, but it has limitations (it can only hand out as many IDs as fit in a 32-bit integer).
Now that we’ve declared our nodeIndex variable, let’s go ahead and declare a class to represent a node in our graph. This class needs to contain the data associated with the node, the index of the node, and a map of the nodes linked to the current node along with the weight of each edge. The class declaration looks as follows:
The node class needs class level variables containing the data, edges, and index.
Declaring the edges as a map of nodes to integers lets us store the weight of each edge alongside the neighbouring node. Add getters and setters for the data, index, and edges.
public U getData() {
return data;
}
public void setData(U data) {
this.data = data;
}
public Map<Node<U>, Integer> getEdges() {
return edges;
}
public void setEdges(Map<Node<U>, Integer> edges) {
this.edges = edges;
}
public int getIndex() {
return index;
}
public void setIndex(int index) {
this.index = index;
}
Ok, so the next thing to do is to write a constructor for our Node class. This constructor takes the data as input. It needs to set the data, initialize the edges map, set the index to the current nodeIndex, and increment the nodeIndex (so that each node’s index is unique).
The constructor is as follows:
public Node(U data) {
this.data = data;
edges = new HashMap<Node<U>, Integer>();
this.index = nodeIndex;
nodeIndex += 1;
}
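One caveat: because nodeIndex is a plain static int, the constructor above is not thread-safe; two threads constructing nodes at the same time could read the same index. For this single-threaded tutorial that is fine, but if it ever mattered, a hedged alternative sketch (IndexDemo and nextIndex are illustrative names) is to hand out indices with java.util.concurrent.atomic.AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class IndexDemo {
    // getAndIncrement() is atomic, so every call yields a distinct index,
    // even when called from multiple threads at once
    private static final AtomicInteger nodeIndex = new AtomicInteger(0);

    static int nextIndex() {
        return nodeIndex.getAndIncrement();
    }

    public static void main(String[] args) {
        System.out.println(nextIndex()); // 0
        System.out.println(nextIndex()); // 1
        System.out.println(nextIndex()); // 2
    }
}
```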
Note that this will result in a unique index each time (as long as the number of nodes remains less than the capacity of an integer), since the nodeIndex variable is declared static. It’s time now to add a function that adds an edge to our node. It takes a node and an integer weight as input. Since we are making an undirected graph, it will add the edge to our current node as well as to the node on the other end of the edge.
The function definition looks as follows:
Put the node and the weight into our hash map. This will either add a new entry or overwrite a previous entry.
Next, add code to ensure that the other node contains an edge back to this node with the same weight, since our graph is bidirectional. If the other node’s edge map doesn’t contain this node, simply add it.
Otherwise, if it does already contain this node, overwrite the entry when the weight is different.
The finished function is as follows:
/**
* Add an undirected edge, will replace an already existing edge between the two nodes
*/
public void addEdge(Node<U> node, int weight) {
edges.put(node, weight);
if(!node.getEdges().containsKey(this)) {
node.addEdge(this, weight);
} else {
if(node.getEdges().get(this) != weight) {
node.addEdge(this, weight);
}
}
}
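To see the mirroring behaviour in isolation, here is a minimal standalone sketch of the same idea: a stripped-down Node holding only its edge map (EdgeDemo and this simplified Node are illustrative, not the tutorial class). The weight check before recursing is what stops the two addEdge calls from bouncing back and forth forever.

```java
import java.util.HashMap;
import java.util.Map;

public class EdgeDemo {
    static class Node {
        Map<Node, Integer> edges = new HashMap<>();

        // Undirected edge: store the weight on both endpoints, recursing
        // only when the mirror entry is missing or has a different weight
        void addEdge(Node node, int weight) {
            edges.put(node, weight);
            Integer back = node.edges.get(this);
            if (back == null || back != weight) {
                node.addEdge(this, weight);
            }
        }
    }

    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.addEdge(b, 5);
        System.out.println(a.edges.get(b) + " " + b.edges.get(a)); // both sides see the edge
        a.addEdge(b, 2); // re-adding overwrites the weight in both directions
        System.out.println(a.edges.get(b) + " " + b.edges.get(a));
    }
}
```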
The finished node class is as follows:
private class Node<U> {
private U data;
private Map<Node<U>, Integer> edges;
private int index = -1;
public Node(U data) {
this.data = data;
edges = new HashMap<Node<U>, Integer>();
this.index = nodeIndex;
nodeIndex += 1;
}
public U getData() {
return data;
}
public void setData(U data) {
this.data = data;
}
public Map<Node<U>, Integer> getEdges() {
return edges;
}
public void setEdges(Map<Node<U>, Integer> edges) {
this.edges = edges;
}
public int getIndex() {
return index;
}
public void setIndex(int index) {
this.index = index;
}
/**
* Add an undirected edge, will replace an already existing edge between the two nodes
*/
public void addEdge(Node<U> node, int weight) {
edges.put(node, weight);
if(!node.getEdges().containsKey(this)) {
node.addEdge(this, weight);
} else {
if(node.getEdges().get(this) != weight) {
node.addEdge(this, weight);
}
}
}
}
With the node class completed, this makes a great spot to wrap this post up. The implementation of Dijkstra’s algorithm will be continued in the following post: Implementing Dijkstra’s Algorithm (Shortest Path) in Java - Part Two.
A visual representation of an undirected graph is as follows:
A visual representation of a directed graph is as follows:
Advantages:
Disadvantages:
Starting with the getMaxNodeAndParent() function. This function takes as input the node to start searching from (root) and that node’s parent (parent). It returns the maximum node in the subtree rooted at root, along with that node’s parent.
This function is fairly simple to implement; there are three cases to worry about:
Here’s how we will handle each case:
The function definition looks as follows:
private Pair<TreeNode<T>, TreeNode<T>> getMaxNodeAndParent(TreeNode<T> parent, TreeNode<T> root) {
}
If the root node is null simply return null.
If there is a node to the right of the root node recursively return the maximum node and parent of the right node.
If we make it past that statement (ie root.right == null) then return a pair consisting of parent and root as root is the maximum node.
The finished getMaxNodeAndParent() function is as follows:
private Pair<TreeNode<T>, TreeNode<T>> getMaxNodeAndParent(TreeNode<T> parent, TreeNode<T> root) {
if(root == null)
return null;
if(root.right != null)
return getMaxNodeAndParent(root, root.right);
return new Pair<TreeNode<T>, TreeNode<T>>(parent, root);
}
Onto the getMinNodeAndParent() function. It is similar to the getMaxNodeAndParent() function except that it returns the minimum node instead of the maximum node.
This function is similar in implementation; again there are three cases to worry about:
Here’s how we will handle each case:
The function definition is as follows:
private Pair<TreeNode<T>, TreeNode<T>> getMinNodeAndParent(TreeNode<T> parent, TreeNode<T> root) {
}
Return null if the root node is null.
If root.left is not equal to null recursively get the minimum node.
If root.left is equal to null simply return a pair consisting of the parent and the root node as the root node is the minimum node.
The finished getMinNodeAndParent() function is as follows:
private Pair<TreeNode<T>, TreeNode<T>> getMinNodeAndParent(TreeNode<T> parent, TreeNode<T> root) {
if(root == null)
return null;
if(root.left != null)
return getMinNodeAndParent(root, root.left);
return new Pair<TreeNode<T>, TreeNode<T>>(parent, root);
}
Next let’s implement a function that recursively prints our tree to a StringBuilder. It takes as input the node representing the current location in the tree, as well as the StringBuilder to append to. This function will be used later in our toString() method. The toStringRecursive() function definition is as follows:
If the node is null, simply return; there is nothing to do.
If the node has a left child (ie node.left != null) we need to print it first. We do this by calling toStringRecursive() with node.left, then appending ", " to the StringBuilder so the output reads nicely.
Next, append the value of node.data to the StringBuilder.
Now handle the right side of the node. If the node has a right child (ie node.right != null) we print it after the left side and the current node: first append ", " to the StringBuilder, then call toStringRecursive() with node.right.
The finished toStringRecursive() function is as follows:
private void toStringRecursive(TreeNode<T> node, StringBuilder s) {
if(node == null)
return;
if(node.left != null) {
toStringRecursive(node.left, s);
s.append(", ");
}
s.append(node.data);
if(node.right != null) {
s.append(", ");
toStringRecursive(node.right, s);
}
}
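The left-node-right order of the recursion is what makes the output come out sorted: everything smaller than a node lives in its left subtree and is appended first. Here is a minimal standalone sketch of the same traversal over a hand-built tree (InOrderDemo and its TreeNode are illustrative names, not the tutorial class):

```java
public class InOrderDemo {
    static class TreeNode {
        TreeNode left, right;
        int data;
        TreeNode(int data) { this.data = data; }
    }

    // Same left -> current -> right recursion as toStringRecursive()
    static void inOrder(TreeNode node, StringBuilder s) {
        if (node == null) return;
        if (node.left != null) { inOrder(node.left, s); s.append(", "); }
        s.append(node.data);
        if (node.right != null) { s.append(", "); inOrder(node.right, s); }
    }

    public static void main(String[] args) {
        //      5
        //     / \
        //    3   8
        //   /   /
        //  1   6
        TreeNode root = new TreeNode(5);
        root.left = new TreeNode(3);
        root.left.left = new TreeNode(1);
        root.right = new TreeNode(8);
        root.right.left = new TreeNode(6);
        StringBuilder s = new StringBuilder();
        inOrder(root, s);
        System.out.println(s); // values come out in ascending order
    }
}
```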
With that out of the way we can write a toString() method to return a String representation of our MyBinarySearchTree class. This prints all of the items in the tree in order from smallest to largest. The implementation is fairly straightforward, as most of the heavy lifting is already done by the toStringRecursive() function.
We must simply append a "[", call toStringRecursive() on the root node, append a "]", and return the built string.
So here is the finished toString() function:
public String toString() {
StringBuilder s = new StringBuilder();
s.append("[");
toStringRecursive(rootNode, s);
s.append("]");
return s.toString();
}
The finished MyBinarySearchTree class is as follows:
package com.spartanengineer.datastructures;
import java.util.*;
import javafx.util.Pair;
public class MyBinarySearchTree<T extends Comparable<T>> {
private class TreeNode<U extends Comparable<U>> {
public TreeNode<U> left = null;
public TreeNode<U> right = null;
public U data = null;
public TreeNode(U data) {
this.data = data;
}
}
private TreeNode<T> rootNode = null;
private int size = 0;
public MyBinarySearchTree() {
}
public void insert(T data) {
TreeNode<T> newNode = new TreeNode<T>(data);
if(rootNode == null) {
rootNode = newNode;
} else {
TreeNode<T> parentNode = rootNode;
while(true) {
if(data.compareTo(parentNode.data) <= 0) {
if(parentNode.left == null) {
parentNode.left = newNode;
break;
}
parentNode = parentNode.left;
} else {
if(parentNode.right == null) {
parentNode.right = newNode;
break;
}
parentNode = parentNode.right;
}
}
}
size++;
}
public boolean contains(T data) {
TreeNode<T> node = rootNode;
while(node != null) {
if(data.compareTo(node.data) == 0)
return true;
else if(data.compareTo(node.data) < 0)
node = node.left;
else
node = node.right;
}
return false;
}
private Pair<TreeNode<T>, TreeNode<T>> getMaxNodeAndParent(TreeNode<T> parent, TreeNode<T> root) {
if(root == null)
return null;
if(root.right != null)
return getMaxNodeAndParent(root, root.right);
return new Pair<TreeNode<T>, TreeNode<T>>(parent, root);
}
private Pair<TreeNode<T>, TreeNode<T>> getMinNodeAndParent(TreeNode<T> parent, TreeNode<T> root) {
if(root == null)
return null;
if(root.left != null)
return getMinNodeAndParent(root, root.left);
return new Pair<TreeNode<T>, TreeNode<T>>(parent, root);
}
public boolean remove(T data) {
if(rootNode == null)
return false;
if(data.compareTo(rootNode.data) == 0 && rootNode.left == null && rootNode.right == null) {
size = 0;
rootNode = null;
return true;
}
TreeNode<T> parentNode = null;
TreeNode<T> toDelete = rootNode;
while(toDelete != null) {
if(data.compareTo(toDelete.data) == 0) {
//this is where we remove the node
Pair<TreeNode<T>, TreeNode<T>> pair = getMaxNodeAndParent(toDelete, toDelete.left);
TreeNode<T> toMove = null;
TreeNode<T> toMoveParent = null;
if(pair != null) {
toMoveParent = pair.getKey();
toMove = pair.getValue();
if(toMoveParent.left == toMove)
toMoveParent.left = toMove.left;
else
toMoveParent.right = toMove.left;
} else {
pair = getMinNodeAndParent(toDelete, toDelete.right);
if(pair != null) {
toMoveParent = pair.getKey();
toMove = pair.getValue();
if(toMoveParent.left == toMove)
toMoveParent.left = toMove.right;
else
toMoveParent.right = toMove.right;
}
}
if(toMove != null) {
toMove.left = toDelete.left;
toMove.right = toDelete.right;
}
if(parentNode != null)
if(parentNode.left == toDelete)
parentNode.left = toMove;
else
parentNode.right = toMove;
else
rootNode = toMove;
size--;
return true;
} else if(data.compareTo(toDelete.data) < 0) {
parentNode = toDelete;
toDelete = toDelete.left;
} else {
parentNode = toDelete;
toDelete = toDelete.right;
}
}
return false;
}
private void toStringRecursive(TreeNode<T> node, StringBuilder s) {
if(node == null)
return;
if(node.left != null) {
toStringRecursive(node.left, s);
s.append(", ");
}
s.append(node.data);
if(node.right != null) {
s.append(", ");
toStringRecursive(node.right, s);
}
}
public String toString() {
StringBuilder s = new StringBuilder();
s.append("[");
toStringRecursive(rootNode, s);
s.append("]");
return s.toString();
}
}
This concludes the implementation of MyBinarySearchTree.