OK, well, I’ve finally made some progress on figuring out the best way to make this work. There are a bunch of small things that make the setup smoother, but the biggest is figuring out where to store repos and where to run commands for a development group that is mainly macOS/Linux but needs output from a few native Windows applications. So, here are the final notes:
- One of the big issues that I can’t figure out with Google-fu is getting native Windows to deal with Git LFS and/or submodules. First I hit this problem with the native OpenSSH that Windows includes; trying to fix it with
choco install openssh
seemed to help, but git would just hang. I then tried
scoop install git-with-openssh
which at least got the latest OpenSSH, but I can’t for the life of me figure out how to run ssh-agent and sshd that way. The best option for git so far seems to be
choco install openssh
and then you can at least ssh into your Windows machine.
- But to really use git effectively, although slow, the best solution seems to be to run WSL2 and install “real git” and OpenSSH on that side. Doing this, I wasn’t getting the hangs in things like git pull that use these fancy git features. This is nice because it minimizes the amount of PowerShell you need to learn, and tools like make just work on WSL2. You can, for instance, download Make for Windows, but then your execution environment is harder to understand; it gets confusing to figure out whether you are running under PowerShell or what.
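As a concrete sketch of that setup (assuming an Ubuntu-based WSL2 distro — the install line is an example, not the only way), once git and OpenSSH are installed on the Linux side you can sanity-check that the shell resolves to the Linux binaries rather than the Windows ones that WSL2 exposes under /mnt/c:

```shell
# On an Ubuntu-based WSL2 distro the install would be something like:
#   sudo apt-get install -y git git-lfs openssh-client
# Then confirm the Linux binaries win on the PATH (not /mnt/c/... versions):
command -v git        # should print /usr/bin/git
git --version
# Start an ssh-agent for this shell session, if one is available:
{ command -v ssh-agent >/dev/null && eval "$(ssh-agent -s)"; } || true
```

If `command -v git` prints a path under /mnt/c, the Windows git is shadowing the Linux one and the hangs can come back.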
- The next problem is where to store the data. It turns out that WSL2, as you might expect, has a magic second disk which grows (but doesn’t shrink), so if you store a bunch of stuff inside WSL2, you will lose lots of disk space unless you run a pretty scary diskpart command to harvest space back from the vdisk. The general conclusion is that for Windows applications, you can git clone into the Windows file system with
mkdir -p /mnt/c/Users/$USER/ws/git
and then you just
git clone
as usual. This is going to be very slow for git, but very fast for Windows applications.
- If you do this, the native Windows applications will run fast. The issue is that WSL2 has a different file system, and the only way for Windows to access it is as a network drive via
\\wsl$
which is a virtual network share. Accessing files that way is of course slow, so it is better to keep the data on the Windows side, where the Windows applications run fast.
- Final note: if you are doing this, you probably want the Windows application output under source code control, or at least versioned. There are two paths for this. For inputs that are going to be fed in, something like Google Cloud Storage buckets works well: you can do a
scoop install google-cloud-sdk
and then
gsutil
is available to you. One of the great commands is
gsutil -m rsync -d -r gs://_some_google_bucket_ _some_windows_drive_
The -d means files that no longer exist in the bucket are deleted locally, so your local copy exactly mirrors the cloud — great for inputs.
- Then for things that are outputs, putting them into GitHub is pretty smart. For instance with Unreal Engine, you can have git lfs manage their big files like
git lfs track "*.bin"
and then git lfs takes care of the binaries. The main thing to watch is that you should not import big environments to try out and then commit them: because of the way Git works, it will keep those environments forever, and they clog up both the git caches and git lfs. If this gets to be a real problem, the best fix seems to be to start a new repo so you get a fresh history. While you can try to limit the clone depth, if you are just starting it is easier to start a new repo, I think. Don’t be like me: I checked in 50GB of junk maps and environments that would have to be hauled around forever 🙂
- And as always, this is a good time to do a little git compression, which removes things you no longer need, like stale remote-tracking branches and unreachable objects, with the massive one-liner:
git remote prune origin && git repack && git prune-packed && git reflog expire --expire=1.month.ago && git gc --aggressive
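To see whether you have already committed junk that git will haul around forever, one useful sketch is listing the largest blobs in history. It is demonstrated here on a throwaway repo so it runs anywhere; in a real clone you would run just the final pipeline:

```shell
# Build a throwaway repo with one "big" binary so the pipeline has data.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "you"
dd if=/dev/zero of=big.bin bs=1024 count=64 2>/dev/null
git add big.bin && git commit -qm "add a big binary"

# The useful part: every blob in history, largest first (bytes, then path).
git rev-list --objects --all |
  git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
  awk '$1 == "blob" {print $3, $4}' |
  sort -rn | head -10
```

In the throwaway repo this prints `65536 big.bin`; in a repo with committed environments, the multi-gigabyte offenders show up at the top even after the files have been deleted from the working tree.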
There are some other good things happening, like WSLg, which lets you run Linux applications in the Windows graphical interface. A lot of this is a little confusing to me, since the question is why you need a Windows kernel at all; probably the main reason is to run Office, and then you can develop under Windows. That’s similar to what macOS has meant for Linux developers: you can run your Mac apps. It is a little trickier with Windows since you can’t really share the same file system, but if you keep your development environment small on the WSL2 side and keep your Windows files on the Windows side, it runs pretty well 🙂
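Putting the two file-system directions used above side by side (the distro name is just an example — whatever appears under \\wsl$ depends on what you installed), a quick check of which side you are on:

```shell
# WSL2 -> Windows: the C: drive is auto-mounted at /mnt/c
# Windows -> WSL2: the distro shows up as \\wsl$\<distro>, e.g. \\wsl$\Ubuntu
if [ -d /mnt/c ]; then
    ls /mnt/c/Users              # Windows user profiles, seen from Linux
else
    echo "not inside WSL2"       # plain Linux, macOS, or a container
fi
```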