Initial commit

This commit is contained in:
Nathan Fisher 2021-02-19 22:43:36 -05:00
commit a9de527a5c
192 changed files with 45573 additions and 0 deletions

6
.gitmodules vendored Normal file

@ -0,0 +1,6 @@
[submodule "hugo-eureka"]
path = hugo-eureka
url = https://github.com/wangchucheng/hugo-eureka.git
[submodule "themes/eureka"]
path = themes/eureka
url = https://github.com/wangchucheng/hugo-eureka.git

2
Makefile Normal file

@ -0,0 +1,2 @@
all:
zola build --output-dir ../site

21
config.toml Normal file

@ -0,0 +1,21 @@
# The URL the site will be built for
base_url = "/"
# Whether to automatically compile all Sass files in the sass directory
compile_sass = true
# Whether to build a search index to be used later on by a JavaScript library
build_search_index = true
taxonomies = [
{ name = "tags", feed = true },
{ name = "categories", feed = false },
]
[markdown]
# Whether to do syntax highlighting
# Theme can be customised by setting the `highlight_theme` variable to a theme supported by Zola
highlight_code = true
[extra]
# Put all your custom variables here

37
content/about.md Normal file

@ -0,0 +1,37 @@
+++
title = "About Hitch Hiker Linux"
date = 2021-02-15
+++
```bash
#!/don't/panic
man 42
```
HitchHiker Linux is a very Unix-like Linux distribution with a focus on
simplicity, elegance and providing a solid base that the end user can turn into
whatever they see fit.
## Core Principles:
* The default installation should include the bare minimum required software to provide a solid base.
* Complexity should be discouraged in favor of code elegance, security and maintainability.
* End users should not be discouraged from tinkering with their system.
* The distribution should make as few assumptions as possible regarding end use.
* While newer releases of software often eliminate bugs and vulnerabilities, newer software packages are not by default more secure than stable, mature packages (newer is not always better).
* Any changes to the core system functionality, especially those which change expected functionality, must be well justified and well vetted before deployment.
* The base installation should include everything required to rebuild itself from source.
* The distribution should make as few changes to the upstream software as possible, delivering it as intended by the original author.
* Patching of sources should only be done to fix bugs or vulnerabilities.
HHL was born of a desire to harness the greater hardware support of Linux while
paying respect to the Unix systems from which Linux was born. The author was a
long-time user of FreeBSD who migrated to Arch for several years, but has become
increasingly frustrated with the dominance of Systemd, Gnome, RedHat and Ubuntu.
We believe there is a need for a distribution that does not pander to ease of
use for casual users at the expense of putting up roadblocks for experienced
Unix veterans.
## Architectures
HHL is running on the following processor architectures:
* x86
* x86_64
* armv7l
* aarch64
* riscv64

7
content/news/_index.md Normal file

@ -0,0 +1,7 @@
+++
title = "Recent posts"
sort_by = "date"
template = "news.html"
paginate_by = 5
page_template = "news-page.html"
+++


@ -0,0 +1,43 @@
+++
title = "Compiling for aarch64/riscv64 from qemu user mode chroot"
date = 2021-01-28
[taxonomies]
tags = ["Arm","Porting","RiscV","Qemu"]
+++
This is a novel approach that I've been experimenting with lately, to do some
legwork towards supporting architectures for which I either don't have physical
hardware or else the hardware is so limited in capability as to make native
compilation impractical.
<!-- more -->
Qemu, in addition to running full virtual machines, also has what is known as
"user mode emulation". In a nutshell, when using qemu in this way one is able to
run binaries for another architecture without creating virtual hardware or a
virtual kernel. In this case, qemu just acts as a translation layer, translating
instructions designed for one ISA to instructions that can be executed on the
host processor. By not running a virtual kernel on virtual hardware, there is
actually a huge savings of system resources.
Setting this up, while not trivial, is surprisingly straightforward. First, a
suitable root filesystem must be created. Qemu must be compiled statically, and
the static binary copied into the chroot filesystem. Then, by using binfmt_misc,
one can register this qemu executable as the program interpreter for binaries of
the guest architecture. In this way it is actually possible to set up a
functional chroot on a well-specced x86_64 machine running Risc-V binaries.
There is a bit more to getting this to work, but suffice it to say that it does
in fact work, within the normal limitations of a chroot.
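As a rough illustration, the steps above boil down to something like the
following sketch. The rootfs path and the qemu binary name are hypothetical,
and the magic/mask pair is the standard riscv64 registration shipped with
qemu's binfmt helper scripts, not anything specific to HitchHiker.
```bash
# Copy a statically linked user-mode qemu into the guest root filesystem.
cp /usr/bin/qemu-riscv64-static /srv/riscv64-rootfs/usr/bin/

# Register it with binfmt_misc as the interpreter for riscv64 ELF binaries.
# (bash's plain echo passes the backslashes through; the kernel decodes them.)
mount -t binfmt_misc binfmt_misc /proc/sys/fs/binfmt_misc
echo ':qemu-riscv64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-riscv64-static:F' \
    > /proc/sys/fs/binfmt_misc/register

# From here on, riscv64 binaries inside the chroot "just run".
chroot /srv/riscv64-rootfs /bin/sh
```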
While running foreign binaries does incur some performance penalty, this method
is actually proving to be a significant improvement over, say, compiling for
aarch64 on a Raspberry Pi 4, as I have four times the memory and a significantly
faster processor with twice the number of cores available. It also has the
benefit of not requiring a cross compiler. Instead, one is running a native
compiler via an emulation layer, which eliminates many of the pain points of
cross compiling. It's a neat trick.
So what does this mean for HitchHiker? It means that work is now well underway
towards running HitchHiker on a BeagleV when they become widely available later
this year. I have been eagerly waiting to dip my toes into Risc-V as soon as an
affordable option was available, and it looks like said option is now on the
horizon.


@ -0,0 +1,13 @@
+++
title = "Continued porting of BSD userland against remaining self-hosting"
date = 2020-09-03
[taxonomies]
tags = ["Porting","Roadmap","NonGNU"]
+++
At this juncture we're well over the halfway point in providing a predominantly BSD userland in HitchHiker, with over 90 console utilities ported against a total of 59 remaining programs brought in from GNU coreutils. That total includes<!-- more -->, of course, some programs that are not included in GNU coreutils. Some of the utilities replace standalone GNU packages, such as grep and diff, while some are utilities that may not be present on most Linux distributions, such as pax or leave. In general, the BSD utilities have compatible functionality, as both GNU and BSD have mostly embraced the POSIX standard, with a few caveats where extensions have been made to the standard. The biggest differences reflect a difference in philosophy: the BSD utils are quite small and include only what they must, while the GNU versions are all (yes, all) larger and accept more options. The GNU utilities in particular accept long options and in general have better internationalization. The GNU utilities also accept the --help option, giving a brief summary of their function. The BSD utilities specifically omit this, as each command comes with a man page that a --help summary would merely duplicate.
One of the dangers in blindly replacing the usual GNU userland on Linux is that when the accepted options differ slightly, some scripts may not run as expected. This can also affect building packages from source if care is not taken to stick with only POSIX-specified options when writing Makefiles. To some extent I have been working to minimize the effects here by taking the time to verify that each utility works as expected, and in some cases by implementing options that most users expect, such as the -v (verbose) switch for the ln and rmdir utilities. In other cases, such as the much abused -a (--archive) option of GNU cp, implementing equivalent functionality in the BSD port is definitely non-trivial. Many people who use 'cp -a' are just trying to copy directories recursively while preserving symlinks, and not actually wanting to preserve ownership or modification times; those cases can be served by using BSD cp as 'cp -R'. If preserving attributes is the desired behavior, then this is one of the tradeoffs being made and extra steps will have to be taken with chown and touch.
In order to ensure that HitchHiker always remains self-hosting, I have been bootstrapping the system from itself multiple times during the porting effort. A few issues have been found and fixed along the way, but it has been an overall smooth transition. In one case, I had actually misused 'cp -a' myself in the busybox build (busybox is only built as part of the temporary toolchain).
One particularly special case is glibc, which has a hard dependency on GNU awk during build time. As I have a preference for the historical (yet still maintained) Nawk, this requires building a statically compiled Gawk early in the bootstrap process as part of the temporary tools in /toolchain, which is discarded along with the rest of the temporary tools after the bootstrap phase and does not become part of the base system. Another example of the "Sheer F%$king Hubris" of much GNU software, which often will only build with other GNU tools, even though other tools exist with equivalent (and sometimes superior) functionality.
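For the curious, the temporary Gawk build amounts to something like the sketch below. The version number, flags and paths are illustrative assumptions rather than the exact recipe from the HitchHiker build tree.
```bash
# Build a throwaway, statically linked gawk into /toolchain so glibc can
# configure and build; /toolchain is discarded after the bootstrap phase.
tar xf gawk-5.1.0.tar.xz
cd gawk-5.1.0
./configure --prefix=/toolchain --disable-nls LDFLAGS=-static
make
make install   # installs under /toolchain, never into the base system
```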


@ -0,0 +1,50 @@
+++
title = "Cross Build Progress"
date = 2021-02-08
[taxonomies]
tags = ["Programming","Milestones","Porting","RiscV"]
+++
The sysroot/cross-compile build tree is nearly able to build all of HitchHiker
now, with only a few packages left to tweak.
<!-- more -->
* pkg-config: This is messy in general. There is a circular dependency with Glib,
satisfied by the pkg-config maintainers incorporating a recent version of Glib
directly into the distribution. However, Glib requires extra work to cross
compile. I've looked into it but not yet implemented the fixes.
* XML::Parser: Perl was actually able to cross compile successfully to both
aarch64 and riscv64 using perl-cross. XML::Parser is a perl module, uses perl
during its build, and perl modules in general have their own home-baked build
system which does not play well with cross compiling. I'm investigating whether
it really needs to be in the base system or can be moved to ports.
* Ninja and Meson: may not actually be needed in the base system?
* zstd: Uses a Makefile or Meson for its build system. I've already dealt with
a few projects that built with a plain Makefile and was just putting this off,
to be honest, as a few of the others required significant reworking to get a
functional build.
* iproute2: Another one using its own home-grown configure script, which will
likely require much manual intervention.
* mandoc: This builds after a fair amount of manual intervention. However, it
segfaults on riscv64. It's unknown whether this is due to qemu or whether it
will segfault on real hardware, but it's not a great sign. I'm looking at
porting an earlier man utility from 4.4BSD-lite. Man should be a simple program.
* file: Our file utility, descended from the relatively recent OpenBSD sandboxed
file, built fine on Intel and Arm architectures, but the original author pulled
in some SECCOMP defs that are architecture dependent and failed to account for
any architectures besides Intel and Arm. I've hacked the source so that it
compiles on riscv64. However, there appears to be more work to be done to get it
fully working on that arch.
There were, of course, numerous hiccups along the way. I'll elaborate on a few.
Bzip2 has its own Makefile-based build. It also has a separate Makefile to build the shared version of the library, and even when you do build the shared version it uses the static version for bzip2recover. In true HitchHiker fashion, after mucking about with what was already there for a while, I wound up just discarding it and importing the source directly into the build tree, substituting our own infrastructure. This required creating hhl.sharedlib.mk, which was overdue anyway. At the moment we're building both shared and static versions of libbz2, and linking the bzip2 and bzip2recover binaries against the shared library.
Recent versions of util-linux include a new utility, hardlink, which links against pcre2 if available. However, the build system detects pcre2 on the host system, not in the sysroot, and there is no provision to manually disable linking against pcre2. Interestingly, there are two competing implementations of the hardlink program, and the one currently in util-linux is potentially going to be replaced as it is not the more fully featured one. There is some interesting discussion of the issue on the mailing list. In the meantime, since it is a potentially useful utility, I just imported the source of the preferred version and am building it using our build infrastructure.
And now, gettext. This one really pissed me off. Recent versions of gettext include libtextstyle, a library for formatting text for display. The included gettext utilities link against libtextstyle. However, the headers don't yet exist at the time the utilities are built, so the only reason the build is not failing for everyone is that they are building against the headers already present on the host system. The libtextstyle subdirectory is not even in the library search path on the command line.
Now, this is just bad practice. It means that if those headers are updated for a new release, then distro packagers will be building against the old headers, potentially causing undefined behavior and random bugs. I only caught it because the build utterly fails in a sysroot environment.
Furthermore, one of the missing headers, included by textstyle.h, is stdbool.h, because for some reason they're worried about building on systems that don't have a working boolean implementation. This is just crazy at this point. It's no wonder GNU projects get bloated.
In the end, I pulled the headers out of the host system, put them in the appropriate directory inside the source directory, compiled, then did a chroot into the sysroot and did a native compile just to be sure that the headers matched. Those headers are now imported into the build tree and are placed into the source directory after running configure, and the Makefile is patched to search the build directory before the system header directory. It's not a full fix that I would push upstream, but it works. A full fix would build the headers first, before building the utilities, then link against them. I didn't feel like going that far in depth to fix their build.
In a general sense, even though I feel that autotools based builds are often extremely bloated and inefficient, based on the mistakes that I keep fixing in other projects' home-brewed Makefiles to enable cross compilation (as well as simple things like installing into a DESTDIR), I can really see some of the strengths of autotools. Most of the time, all that is needed to cross compile is setting --host and either passing --sysroot in the CFLAGS or, if supported by the specific package, passing --with-sysroot to configure. That isn't to say that autotools is perfect. Frequently the package maintainers compile tests against the host system or check for library support by looking in /usr/include. After a while you begin to realize that you're literally fixing the same mistake repeatedly, and start to feel despair for the human race. And then it works, and everything is all right again.
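For reference, the usual autotools cross-compile pattern mentioned above looks roughly like the sketch below. The target triplet, sysroot location and flags are hypothetical placeholders rather than values from the HitchHiker build tree, and --with-sysroot would be added only for packages that support it.
```bash
# Typical autotools cross build into a sysroot (sketch).
SYSROOT=/opt/hhl/sysroot-riscv64
./configure \
    --host=riscv64-linux-gnu \
    --prefix=/usr \
    CC=riscv64-linux-gnu-gcc \
    CFLAGS="--sysroot=$SYSROOT -O2" \
    LDFLAGS="--sysroot=$SYSROOT"
make
make DESTDIR="$SYSROOT" install
```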


@ -0,0 +1,25 @@
+++
title = "Editors, glibc-2.32"
date = 2020-08-28
[taxonomies]
tags = ["Porting","Deprecation","Glibc","Libraries","NonGNU","4.4BSDlite","Editors"]
+++
Most people who use any form of Unix have a favorite editor. Most of them also think that everyone else is wrong<!-- more --> (but in truth, you're all wrong...). And most Linux distros over the years have gradually catered more and more to the lowest common denominator.
In the beginning, of course, there was only ed (The **STANDARD** editor!) and then at some point in time came Bill Joy's vi. Most people have, of course, never used the original vi. If you are on Linux, chances are good that you have encountered vim. If you are running BSD in any form then you have vi, except that you don't actually have vi, you have nvi. When 4.4BSDlite was released as the first large-scale open source project, the major caveat was that much of the source was copyright-encumbered by AT&T, including the venerable vi editor. All of this source was therefore replaced, painstakingly, a piece at a time. The selling point of nvi at the time was that it was a "bug for bug compatible" rewrite of vi, which started with the original elvis (a vi clone) source.
For most of its existence so far (including its original, never released to the public form from roughly ten years ago) HitchHiker had vim. I've used vim a lot and over the years have only grown more fond of it. Sticking with it over the rather steep learning curve has been quite helpful in situations where the only editor available is some form of vi or a vi clone. However, vim is not without its issues. All forms of vi or clones of vi have a rather steep learning curve for new users. The vim source code is a rat's nest of conditionals and attempts to maintain compatibility with ancient systems that only a masochist or Luddite would use. And for a console based editor it is rather slow (though nowhere near the beast that Emacs has become, a story for another time).
For ease of use and to welcome less experienced users we follow the same path as FreeBSD by offering the ee editor. It's extremely small (the entire source is a single C file) and has great ease of use due to the integrated menu bar. It isn't an especially powerful editor, but it gets the job done.
As for vim, I decided a while back that it was going to be replaced with something more lightweight and with a more "correct" code base. I tried a few times to port nvi to HitchHiker, both from the "official" nvi source distribution and also from BSD sources. The main stumbling block has been that the code assumes headers and C library routines which no longer exist in Linux.
For the solution, let's back up again by a few years. AT&T Unix is the direct ancestor of Solaris. A number of years ago Sun bequeathed the source code for the Solaris operating system to the public in the form of a CDDL licensed source distribution, giving birth to the OpenSolaris operating system in its many forms, allowing FreeBSD to import the zfs filesystem, and effectively unencumbering historical Unix source code to the benefit of all.
Therefore, after a few days tinkering, we now have version 3.7 of Bill Joy's original ex/vi editor, which is the version that was current as of the 4.4BSDlite release, ported to and running on HitchHiker Linux. This has been a very gratifying experience overall. Some (probably many if I'm being honest) will miss certain features that they have grown used to in vim, such as syntax highlighting and visual mode. There is no plugin system (which I've never bothered to use in vim anyway...) and you're basically expected to know what you're doing. But the program is SMALL (roughly half the size of nvi, with no dependency on BDB, and orders of magnitude smaller than vim) and noticeably faster than vim when loading large files or pasting large chunks of text. You probably thought vim was fast. But you only -thought- it was fast.
You are, of course, free to install and use vim if you feel that original vi is too "limiting" for your use. Feel free to use ed for that matter, and to yell out that you're using "THE STANDARD EDITOR" while doing so. We imported ed from FreeBSD a week ago.
In other news, GNU libc (glibc) released version 2.32 this week. I'm running it already on my Arch systems but for the time being holding off on updating HitchHiker. There were a few notable deprecations in this release for functions, macros, and even a few array definitions that are considered too ancient to support. However, they actually do affect HitchHiker in a few places as we have some rather "historical" code imported into the userland tools. I spent an hour this morning fixing compilation errors when compiling mksh and ex/vi against the new library, in preparation for eventually merging it in. I'll have to check all of the other imported sources against it another day.
Interestingly, I also had to start using clang to compile these tools in Arch, as the gcc 10 series encountered an "internal compiler error", which was actually a segmentation fault, when compiling the ex/vi sources. In HitchHiker we use gcc-9.3.0 and these problems don't exist. Good reason to not upgrade for the time being.


@ -0,0 +1,33 @@
+++
title = "Experimental clang/llvm based build with nightly rust"
date = 2020-11-24
[taxonomies]
tags = ["C Programming","Rust","llvm","Clang","NonGNU","Packages"]
+++
Ok, so this little side track has been on a slow burn for a while now, but I've currently got a rootfs that I can chroot into with the following features:
* Built almost entirely with llvm/clang (see caveats below)
* Rust nightly compiler installed early in the build, accessible for all users
<!-- more -->
* Directory hierarchy flattened, directories in /usr are symlinks to directories in /
* libc++ standard C++ library from the llvm project instead of libstdc++ from gcc
* Both GNU binutils and the lld linker installed (default is still binutils ld for now)
* x86_64 only at this time, but should be portable to Arm
Some caveats:
* llvm/clang wants to build against libstdc++, so not self hosting unless we go back to libstdc++
* There were issues building the compiler-rt module, so llvm and clang link against libgcc
* The GNU m4 package will not build with clang, so it had to be built with the gcc compiler from the temporary toolchain. I'm sure this is fixable, but I haven't researched it much yet.
* The biggie: glibc refuses to try to build with any compiler except gcc. The. Sheer. Fucking. Hubris. ...again...
Nevertheless, this is all quite promising and brings up a number of possibilities going forward. As llvm by default supports multiple backends for code generation, a huge benefit is that HitchHiker would come out of the box with cross compilation ability. Also, including rust by default gives us the ability to use Rust code in the base system, with numerous potential benefits that I will explain below.
The directory hierarchy is simple to understand, but is reversed in implementation from what the "big" distros have done. Basically, most "modern" distros have ditched the /usr split by making /bin and /lib into symlinks to /usr/bin and /usr/lib. In this build we instead just give all programs a blank install prefix. Our root directory now holds not only all of our binaries and libraries but also program data under /share, and we only have /usr because it is expected. Everything in /usr is, in fact, just symlinks.
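In shell terms the flattened layout amounts to something like the following sketch; the exact list of directories is an assumption on my part, not taken from the build tree.
```bash
# Programs install with an empty prefix, so everything lands under /.
# /usr exists only because it is expected, and is populated with symlinks.
mkdir -p /usr
for d in bin sbin lib libexec include share; do
    ln -s "/$d" "/usr/$d"
done
```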
Rust is installed in a novel way using rustup. As root, we set the RUSTUP_HOME and CARGO_HOME environment variables to /rust, and make /rust/bin a symlink to /bin. That way, we can track a nightly toolchain system-wide and it will install binaries into /bin, for all users to access. We could, of course, build rustc from source, and I have in fact done so previously. However, apart from the ridiculous compilation time that it takes, this is not a great option until the language is truly stable, as a lot of the interesting bits of the Rust ecosystem need something newer than the stable branch. By using rustup and the binary toolchains, we always have access to the most up to date toolchain, and the ability to easily switch from one to another.
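A sketch of that rustup arrangement is shown below. The rustup-init flags are the standard upstream ones; whether the HitchHiker build invokes them exactly this way is an assumption.
```bash
# System-wide nightly Rust, with binaries ending up in /bin via a symlink.
export RUSTUP_HOME=/rust
export CARGO_HOME=/rust
mkdir -p /rust
ln -s /bin /rust/bin   # rustup "installs" into /rust/bin, i.e. /bin

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | \
    sh -s -- -y --no-modify-path --default-toolchain nightly
rustup default nightly  # track nightly for every user on the system
```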
So what are the benefits of having rust available early on? For starters Rust can access C functions and export functions to C, making it relatively easy to replace parts of the software stack previously written in C with Rust code that enjoys the memory and concurrency safety checks which are not present in C. As an example, relibc is a complete C standard library written in Rust. This alternative C library is a potential future path of investigation. But on a smaller scale, there are a number of interesting cli utilities and programs written in Rust.
* sd is a stream editor similar to sed, but much faster and with a much easier syntax and reduced scope.
* fd is a file finding utility which is orders of magnitude faster than find, and has interesting features like skipping .git folders by default.
* exa is a file lister with some unique features over ls.
* The amp editor, while somewhat vi-like, has interesting extra features such as a fuzzy file finder and token-based jumping, which make moving around even faster than vim.
I have also begun, partially as a learning exercise, writing my own Rust implementations of various small Unix utilities. In the future some may find their way into HitchHiker if we retain Rust in the base system.
While this is still in an early evaluation phase, I think it likely that at the very least we will make the switch from gcc to clang/llvm at some future point, with the possibility of bringing some of the other features along as well. What may not make the cut is libc++, as replacing the C++ standard library is almost as problematic as replacing the C standard library, leading to the need to port a large amount of third party software in the future. However, I will likely be going further with Rust, and with the flattened filesystem, as I want HitchHiker to be a cutting edge distro while still adhering to traditional Unix practices. Basically, we're going to push the envelope when it makes sense to do so, but not follow fads and trends in the Linux community that might very well be dead ends (like I believe Systemd in particular to be).


@ -0,0 +1,49 @@
+++
title = "Goodbye GNU coreutils, importing sed"
date = 2020-09-22
[taxonomies]
tags = ["Milestones","Porting","NonGNU","Utilities"]
+++
The porting effort has reached the point where it is now safe to remove GNU
coreutils. At this point, there are now only 16 utilities provided by the
coreutils package<!-- more --> that are not implemented in another way in HitchHiker, and none of the missing
utilities are dealbreakers. Here is what is missing and why it isn't a problem,
or what the roadmap is for replacement.
* b2sum - computes BLAKE2 checksums - there are quite a number of checksum utilities already, and this one is currently not a dealbreaker. If a reason for replacement is discovered it will be implemented from scratch in C.
* base32 - encodes file or stdin using base32 - we already have base64 imported from OpenBSD.
* basenc - encodes file or stdin using various algorithms - see base32.
* chcon - changes SELinux security context - SELinux is a largely Fedora/RedHat backed API which we don't currently implement in HitchHiker. Standard Unix file permissions are fine-grained enough for just about any use case.
* dir - displays directory contents - despite what GNU say about dir, it is basically ls but with a different default behavior. The same behavior can be emulated by using the appropriate flags with ls.
* dircolors - sets the colors for eg ```ls --color``` - We have BSD ls in HitchHiker, which does not implement the --color option. I have considered re-implementing ls and including the --color option, in which case dircolors would be useful. However, this is non-trivial and no timeframe is given.
* hostid - displays the unique numeric id for the current host - not currently a dealbreaker, but potentially useful. Fairly trivial to implement in C.
* nproc - gets the number of processors currently available - also potentially useful and fairly trivial to implement in C.
* numfmt - format numbers in human-readable form - potentially useful, not a dealbreaker. Non-POSIX utility which accepts mostly GNU style long options. If implemented would be done in a different way, ignoring the behavior of the GNU utility, which is frankly too GNU-centric (long options should have short option equivalents).
* pinky - a mini finger implementation - all information available via pinky can be obtained with other included utilities. If there is a need for accessing information on remote hosts an actual finger implementation is required anyway.
* ptx - honestly, the manpage for this utility reads like gibberish. I've never used it and don't think it's needed; it isn't available on BSD systems and isn't POSIX.
* runcon - run a command with a different SELinux security context - see chcon.
* shuf - shuffles file contents randomly - not particularly useful.
* stdbuf - run command with modified IO buffering - exists in FreeBSD but not NetBSD or OpenBSD. Non-POSIX. Potentially useful, not considered a priority. Porting from FreeBSD is non-trivial, if memory of my first attempt serves.
* timeout - run a command with a time limit - Not available on BSD systems, Non-POSIX. Potentially useful but not a priority.
* vdir - another permutation of ls (see dir) - again just use ls, why have another binary?
None of the missing utilities are POSIX specified, and none are likely to be called from any kind of portable shell script. With the exception of stdbuf, I could not find alternative implementations for any of the missing utilities. This was a long process fraught with a fair bit of trial and error, and continual testing to ensure that the system could still bootstrap itself as utilities were replaced a few at a time. My first tries at removing the coreutils package entirely resulted in various failures, as either a utility had behavior incompatible with one or more packages' build systems, or a missing utility was actually required.
A good example of a surprise was with the 'od' utility, which creates an 'octal dump' of file contents. I did not expect this to be a utility that would see much use. However, the build system for busybox assumes its existence and fails without it. On looking for a replacement I turned to the [Heirloom Toolchest](http://heirloom.sourceforge.net/tools.html), a collection of utilities derived from ancient Unix sources released by Caldera and Sun. This utility has the dubious distinction of influencing one of the more glaring inconsistencies in bourne shell syntax. For most control flow structures in sh, the closing keyword is the opening keyword reversed, i.e. if-else-fi or case-esac. However, od was already in existence before the original bourne shell was written, precluding the use of do-od for looping and giving us do-done instead.
While poking around in the heirloom sources I also ported over pg (an early pager), sum (yet another checksum utility) and factor (prints all prime factors of the given number). The sources have been variously tweaked to be more in line with my own coding style, making future maintenance easier. All but pg are commands which are present in coreutils, giving us greater coverage.
During the porting effort, there were various utilities which were either not present in BSD, sbase or heirloom or else so trivial as to pose no barrier to implementing them from scratch. Here is that list.
* /bin/true - just return true and exit - implemented as a single line shell script.
* /bin/false - the reverse of true.
* /sbin/nologin - used for disabled logins, displays a message that the account is disabled and exits with a failure code - implemented in C.
* /usr/bin/link - creates hard links - implemented in C.
* /usr/bin/unlink - calls the unlink function to remove files - implemented in C.
* /usr/bin/shred
The shred utility was a special case, as it existed only in GNU coreutils until my own implementation. I've always thought that this was a tremendously useful utility and wanted it in HitchHiker. What it does is overwrite the given file with random data to make recovery exceedingly difficult, optionally doing a final pass with zeros and unlinking the file. It can be used on an entire block device to wipe a hard drive clean. I've been working on dogfooding myself in C lately and this was a good opportunity to code something a little bit less trivial. Anyway, this version of the shred command is implemented cleanly from scratch but follows fairly closely the behavior of the GNU version. It differs in not accepting long options, much like the rest of our collection of utilities, and does not implement a few switches that are considered unnecessary in use. At some point I intend to go in and implement file renaming before deletion, akin to the GNU version's "wipesync" method, but the program is otherwise feature complete. In order to give myself another challenge, I also implemented a progress bar that is displayed with the -v option.
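This is not HitchHiker's shred source, but for readers unfamiliar with the utility, the behavior described above can be approximated in plain shell (a real implementation works in block-sized chunks and handles errors):
```bash
f=secret.txt
size=$(wc -c < "$f")
for pass in 1 2 3; do
    # overwrite the file in place with random data
    head -c "$size" /dev/urandom | dd of="$f" conv=notrunc 2>/dev/null
done
head -c "$size" /dev/zero | dd of="$f" conv=notrunc 2>/dev/null  # final zero pass
rm -f "$f"                                                       # then unlink
```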
## On to sed
I have tried in vain to get away from using GNU sed in HitchHiker. After porting BSD sed, sbase sed and GNU minised, all of them proved unsuitable and caused build failures at some point or another. Or rather, non-portable sed usage by the authors of the various build systems caused failure when our sed did not support the input that it was given. The final deal-breaker was the Linux kernel itself, which requires sed to understand some GNU specific regular expressions during the build.
What I have done is import the GNU sed source code directly into the HitchHiker source tree and made it work with our build system. This is only the second GNU licensed program I have done this with (the first being less) and only the third time that I have successfully stripped a program of its autotooled build system. It was, to say the least, not trivial. However, the results are rather impressive, as on my 8-core laptop sed now builds in 3.22 seconds, compared with 31.75 seconds for the autotooled build. I'll take a 10x speed increase any day, and it's a great example of how much there is to gain by just using make compared with an autotools build. Indeed, for a smaller program like sed, most of the time is taken by the configure script, and by installing files after compilation. With our "compile in place" method we're skipping the installation phase entirely.
As GNU sed is fully localized into multiple languages, I took the time to extend the build infrastructure with hhl.locale.mk, which takes any .po files in a program's locale subdirectory (if they exist) and "compiles" them into .mo files in /usr/share/locale using the msgfmt utility. With that in place the infrastructure for building directly from source, rather than just wrapping another build system in make, is pretty much complete, even if there are still things to optimise.
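For illustration, what hhl.locale.mk is described as doing is roughly the shell loop below. The program name and directory layout here are assumptions for the sake of the example.
```bash
# Compile each locale/<lang>.po into /usr/share/locale/<lang>/LC_MESSAGES/<prog>.mo
prog=sed
for po in locale/*.po; do
    lang=$(basename "$po" .po)
    dest="/usr/share/locale/$lang/LC_MESSAGES"
    mkdir -p "$dest"
    msgfmt -o "$dest/$prog.mo" "$po"
done
```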


@ -0,0 +1,46 @@
+++
title = "Imported source, pkgsrc on Arm woes, and make includes"
date = 2020-08-01
[taxonomies]
tags = ["Arm","GNU Make","Pkgsrc","Porting"]
+++
I think we'll start this post discussing some of the differences between BSD make and GNU make, and the misconceptions about them. In the case of BSD make we're primarily referring to NetBSD make, which is a greatly improved version of the historical Unix make<!-- more --> utility, and has also been imported into FreeBSD. Among the BSD camp, GNU make is seen as a second-class citizen. Their version of make is leveraged for building the entire system, without using autotools except in the case of imported external source which has been autotooled by its authors. It is also the utility that wraps all of the functionality of FreeBSD ports and NetBSD pkgsrc - downloading, patching, compiling and packaging third party software. The complete infrastructure is quite impressive.
Much of this functionality lies in leveraging included Makefiles, much in the way that C headers can be included to pull in function definitions, and also similar to the concept of shared libraries. These included .mk files, usually located in /usr/share/mk on a BSD system, allow for extremely concise Makefiles throughout the source tree that just specify a few variables and then pull in all of the targets from an included file. The most prominent are probably bsd.prog.mk and bsd.lib.mk, which tell make how to build C programs and C libraries respectively.
It is a common misconception that this functionality only exists in BSD make. GNU make has this capability as well and I leverage it quite heavily in HitchHiker. Admittedly, it is more developed in BSD make, but the limitations of GNU make in comparison are fairly trivial and easy to work around. One of the sillier things is the default search path that GNU make uses to look for included Makefiles. By default this path starts in the current directory and proceeds through /usr/local/include, /usr/include and finally /usr/gnu/include. Leveraging this would sprinkle .mk files through what are by convention the locations of C header files, or alternatively mean using /usr/gnu/include. I have yet to see a system that has a /usr/gnu directory (maybe it was planned for Hurd?) and I'm not keen on it. As of the last commit, HitchHiker now patches make to no longer search /usr/gnu/include and to search instead /usr/local/include/mk and /usr/include/mk, the latter being our new default directory. This allows us to include a .mk file by name, without specifying a full or relative path, and at the same time keeps all of our .mk files separated from the system C headers.
So what exactly does BSD make have over GNU make?
* Slightly more powerful conditionals. In addition to ifdef, ifndef, ifeq and ifneq BSD make also has a simple if, and allows chaining conditionals with the && and || operators much like shell syntax. By comparison GNU make must nest conditionals for similar functionality.
* Included Makefiles can be relative to the install or source prefix. This is quite powerful for building large projects in custom site locations.
* Possibly others I have not discovered yet, however in practice GNU make can do -everything- that BSD make does if you take the time to learn how to use it.
As part of creating a more integrated build framework, my most recent commit contains a basic analog for bsd.prog.mk, hhl.cprog.mk (cprog due to my not assuming that all programs are written in C!) which now allows extremely concise Makefiles for source code that has been imported into the base system. In the case of rpcgen the following Makefile is all that is needed:
```Makefile
# Makefile - hhl - /usr/src/world/rpcgen
# Copyright 2020 Nathan Fisher <nfisher.sr@gmail.com>
#
progname = rpcgen
cflags += -g
include hhl.cprog.mk
```
As can be seen, there are really only three lines. Lines 1-3 are comments, line 4 specifies the program name, line 5 adds -g to the compiler flags, and line 6 includes hhl.cprog.mk. If you look inside the rpcgen subdirectory of the build tree you will see the following structure:
```
.
├── Makefile
├── man
│ └── rpcgen.1
├── README
└── src
├── config.h
├── proto.h
├── rpc_clntout.c
├──***snip
```
Our included Makefile will generate object files for any .c files in the src/ subdirectory, link those objects into an executable and install man/rpcgen.1 in ${install_prefix}/share/man/man1. It has logic to differentiate man1 from man8 and also installs html and text docs in the doc/ subdirectory. If we need to specify additional cppflags, cflags, ldflags or libs to link with, we just append them to those vars (by convention I use lower case so as to not interfere with any CFLAGS etc. specified on the command line or in an external build system). If we want to create links to our executable they just need to be specified in the binlinks variable. Defining the variable finish and writing a finish target allows us to do anything else that might need to be done, like installing configuration files or runtime data. In short, while being a fairly tidy 102 lines (some of which is whitespace and comments), hhl.cprog.mk successfully works as an analog for bsd.prog.mk.
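As a rough shell rendering of what that include does for a directory like rpcgen (variable names follow the prose above; the real logic of course lives in make, not shell):
```bash
# compile every .c file under src/, link, and install the program and man page
cc $cppflags $cflags -c src/*.c
cc $ldflags -o "$progname" ./*.o $libs
install -m 755 "$progname" "${install_prefix}/bin/"
install -m 644 "man/$progname.1" "${install_prefix}/share/man/man1/"
```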
The short list of programs leveraging this framework now includes the ee text editor, rpcgen, the mksh and lksh shells, and the pax archive/copy/link utility. At some later date the source tree will be reorganized to move these directories into bin or usr.bin, much as is done in the BSD source trees, as well as importing other sources directly into the tree and "HitchHikerizing" their build systems.
The title promised some pkgsrc updates relating to Arm challenges. In a nutshell, pkgsrc seems to be broken in several places for Arm on Linux. Perhaps most annoying is the buildlink system, which is supposed to make it simpler for packages to find the libs that they are to link against by moving everything into work/.buildlink/lib (relative to the subdirectory that bmake was run from). So far I've discovered that the libuuid, libexpat, libstdc++, libacl and libattr libtool archives are skipped every time on Arm for reasons that I have not yet discovered. I can work around it by manually linking the .la files in from /usr/lib, but this requires an annoying level of manual intervention. Nevertheless I have managed to build the full LXDE, XFCE and Mate desktops, several shells, the emacs, nano and pico editors, and Inkscape, and am on to building Gimp presently on my RPI. On x86_64 I have all of that plus a fairly complete server stack, Abiword, Gnumeric and the Firefox browser built.
In short, when q3 releases (and it really is coming soon) there will be a fairly comprehensive selection of software available on both architectures via pkgsrc. However, in the long run we're going to revisit the idea of creating our own ports tree which leverages GNU make and the infrastructure of .mk files that now reside in /usr/include/mk, adding functionality for creating binary package tarballs and downloading and installing them. This will most likely involve a simple homebrewed package format that takes some inspiration from BSD tarballs, Slackware tarballs and Arch Linux tarballs. A sneak peek in the current build tree at how we register the installation of the base system will give you an idea. We're going to keep things purposely simple and avoid writing a full fledged package manager. More details to come.


@ -0,0 +1,12 @@
+++
title = "Improved porting with libbsd"
date = 2020-08-24
[taxonomies]
tags = ["Porting","Roadmap","Libraries","NonGNU","C programming"]
+++
The libbsd project was begun originally by the developers of the Debian GNU/kFreeBSD project as a means of addressing differences in the available functions, headers and C macros between BSD libc and GNU libc. I have imported the source<!-- more --> directly into HitchHiker and "hitchhikerized" the build process. Only the static library archives are built at this time. With a few tweaks and an additional "compat.h" header to address a few omissions, this has led to an improvement in my process of porting BSD utilities, and the vast majority of the imported utilities now link with libbsd functions. There is little measurable overhead in the resulting binaries.
For those utilities that are more difficult to port, I have begun porting the work already done to create lobase, a portable package of the OpenBSD userland, to the HitchHiker build system. I am also pulling in sources from FreeBSD now. The first program ported from FreeBSD to HHL is the ed editor, which in this case was easier to get going than the version from the NetBSD sources.
I have always enjoyed the venerable Unix fortune program, and always include a call to it in my shell profile. While I did consider the idea of including it by default in the HitchHiker base system, I decided to go another route and just implement something simpler from scratch. Hence the small C program 42, which just prints a random quote from the HitchHiker's Guide universe. It accepts no options and can only be extended by editing the source. Just a bit of fun.
![screenshot_42.png](../../assets/screenshot_42.png)


@ -0,0 +1,25 @@
+++
title = "More Pkgsrc"
date = 2020-07-14
[taxonomies]
tags = ["NonGNU","Packages","Pkgsrc"]
+++
While attempting to build the Thunar file manager (via pkgsrc) I encountered a
couple of other errors when it pulled in Samba4 as a dependency. Samba4 has
switched from using GNU autotools to its own internal version of waf, a Python
based tool for configuring software.<!-- more --> Waf takes most options that one
might pass to a GNU configure script. The difference is that an autotools
configure script will generally either ignore an option that it doesn't recognize
or else complain about it but still continue running. In the case of waf, an
unknown option causes an error, which aborts the build. In the case of samba4
this was ```--gmp-include```.
After about an hour of failing to find where that option was coming from, in desperation I just did a grep through every Makefile in pkgsrc for "gmp-include" and found a few candidates. After temporarily commenting out the ```CONFIGURE_ARGS+=``` lines in devel/gmp/builtin.mk the samba4 build was up and running again. Note that I'm still unsure where it's getting pulled in, as I traced the Makefile includes through a few levels before giving up.
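The search itself was nothing fancier than something along these lines (the pkgsrc checkout path is an assumption):
```bash
cd /usr/pkgsrc
# find every Makefile fragment that mentions the offending configure option
grep -rn --include='Makefile*' --include='*.mk' 'gmp-include' .
# then temporarily comment out the CONFIGURE_ARGS+= lines in devel/gmp/builtin.mk
```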
So, samba4 was building, then it wasn't again. This time the build failed due to the lack of the rpcgen utility, which is included in NetBSD (and indeed FreeBSD and OpenBSD and pretty much every BSD variant) but not in Linux. Linux has libtirpc, and rpcgen is widely considered obsolete; in fact it was abandoned by its chief developer, Sun, some time ago. However, since we're building using pkgsrc, it's assumed to be on our system. What's more frustrating is that there is no package for it in pkgsrc. This has already been noted on the mailing list, as it causes more than one package build failure on Linux.
As it looks like going with pkgsrc is going to expect us to have rpcgen in several places, I set about trying to find a portable version of the source code. This turned out to be quite a bit more of a chore than expected, but I did find an older version that had been pulled into a more or less portable state. I've imported that source into the HitchHiker build tree and rewrote the Makefile to suit our purposes. After adding it to our system and restarting the samba4 build (with "--gmp-include" still temporarily excised), samba4 builds successfully, and Thunar was able to build successfully as well.
This whole exercise is a good illustration of the challenges in making any software project truly portable, and why a more efficient build can be achieved by ignoring other systems and targeting the build only to the system actually in use. It's why I've imported the sources for ee, mksh and now rpcgen directly into the source tree and we're building them without any configure checks, as it's already known exactly what's on our running system.
Another thing this illustrates is some of the problems that arise due to projects switching their build system to one of the dozens of projects that have sprung up to "replace" GNU autotools. The fact is that very few, if any, of the replacements have achieved anything close to the flexibility and maturity of autotools, and most probably never will. I'm not a fan of autotools by any means. It's a quagmire that most people don't understand; a lot of devs simply cut and paste from other projects and hope for the best. The Makefiles it generates are of a ridiculous complexity and length. However, the fragmentation that is currently occurring as some projects use cmake, waf, ninja, meson, jigdo, scons or whatever the next kid on the block is named is a true nightmare for anyone trying to build and maintain a full distro. That's not even accounting for the various homebrewed shell scripts that I encounter, which have varying levels of POSIX compliance and make their own assumptions about the utilities present on your system.


@ -0,0 +1,43 @@
+++
title = "On Libc, rust experiments, and Risc-V"
date = 2020-11-15
[taxonomies]
tags = ["C programming","Rust","NonGNU","RiscV","Glibc","Utilities"]
+++
## LibC
Recently I have continued the exercise of re-implementing certain base system utilities in C, uncovering more and more differences between BSD and GNU libc implementations. At this point it's really easy to say that I have a strong preference for the BSD extensions to the libc standard as opposed to the GNU extensions. The Berkeley programmers, when they added features, added truly useful features.
Take random number generation as an example. It is fairly common when programming on Linux to open and read from /dev/urandom. BSD, in the meantime, offers a group of functions in libc (arc4random) that combine pseudo-random number generation with cryptography to produce pseudo-random data that is truly difficult to guess, with a low CPU overhead.
Another small place where I found the BSD libc to offer superior functionality was when I was writing my own version of the *mktemp* utility. The BSD versions of the mkstemp and mkdtemp functions allow one to give a template with a variable-length suffix, to give a choice between quick (with a smaller suffix) or as secure as you want to be. By increasing the suffix length, one can exponentially increase the number of possible filenames, making temp files a more difficult vector of attack.
The GNU (and musl actually) versions of these functions only allow a six character random suffix, period. In fact, my earlier port of mktemp from BSD did not function as expected in this regard, and the GNU coreutils programmers had to roll their own functions to get equivalent functionality to the BSD counterparts. I have elected to not abstract this deficiency away in my code, and the HHL version of the mktemp utility uses the available versions of mkstemp and mkdtemp, with a six character suffix. I would rather maintain the simplicity of the program than to take a risk of introducing bugs by writing my own functions here.
In any event, the work that I have been doing here makes a strong case for a future port of musl, with the additions of some BSD functions ported to work with musl, as our base C library. As attractive as BSD libc is, a direct port is unrealistic as so much of it is tied to kernel interfaces that differ on Linux. I just find it a shame that Linux users are mostly saddled with such a piece of crap (bloated while still being less functional) in such an important position in the software stack.
## Rust
There are a lot of programming languages, and they tend to come and go regularly. As programming languages go, it's a safe bet that C, C++, Java and Python will be sticking around. Of the less pervasive languages Go and Rust seem to be well ahead of the pack as pertaining to mass adoption. Go is interesting, largely because of what it leaves out. Go is simple. Go avoids the feature creep that seems to be inevitable in all modern software. However, Go achieves memory safety by using garbage collection. I want to like Go, but that one anti-feature prevents me from even trying it.
Rust, on the other hand, has feature upon feature, to the point where I really am hoping that they finish the language someday. That said, I'm intrigued by Rust for a number of reasons. I have been toying with learning the language for a while, doing some simple learning exercises and reading the Rust Book. I recently decided to just dive in and start writing my own programs using Rust, as that seems to be the only way to truly learn a language. My first full Rust program is a clone of a C program that I wrote for generating sine lookup tables, useful in DSP applications. The C version is perfectly fine, but I basically wanted to try porting a known working program and see how well the control flow translated.
The process was not completely smooth, and it took quite a few iterations to get to the point where the program compiled and actually functioned. However, the documentation is excellent and provided solutions to most of my issues. The other thing that really impressed me was the compiler messages. Often, the compiler will tell you literally the exact change needed to make your program compile. Coming from C, this is a breath of fresh air. And when we're talking about Rust, chances are that if it compiles it's also going to function exactly as expected.
Rust is also elegant in a way that C never will be. Consider the following C code:
```C
for (int i = 0; i < 10; i++) {
// some code
}
```
And the equivalent Rust:
```Rust
for i in 0..10 {
// some code
}
```
One thing that requires a different mindset when one moves from an old-school language like C, to a modern language like Rust, is that in C et al, the standard library tries to be all inclusive. Programmers (at least the good ones) try to avoid adding dependencies to anything other than the standard library. Rust, however, has Cargo. Cargo is, in effect, a built in package manager. This is not unheard of; Python has pip and has for years. In Rust, however, the standard library is quite small and everyone pretty much freely uses cargo to access functionality that is not included in libstd. In my case I used cargo to pull in the excellent clap crate, which parses command line arguments and also provides built in documentation. Compared to getopt in C, clap both does more and does it with little effort.
At this point in time, however, Rust is not going to be landing in the HitchHiker base system, as the only viable Rust compiler uses llvm as a backend. At some point in the future this may well change anyway, as I have been growing increasingly frustrated with both Glibc and gcc (a brief skim of some earlier blog posts will quickly turn up examples). The obvious replacements are musl and the clang compiler, which, being based on llvm, would make the further inclusion of rustc trivial.
## Risc-V
I do a fair amount of electronics experimenting and often employ various microcontrollers in my projects. Some are Atmega based, while I have more recently fallen in love with the ATSAMD51 Arm Cortex M4 processor based boards from Adafruit. Just this week I have acquired something new to me: a Risc-V based MCU from Seeed Studio, the Sipeed Longan Nano. It's a brilliant chip, although quite lacking in documentation and example code. I'll be playing with it a lot.
The Nano, however, is a microcontroller, not a full SOC capable of running Linux. Right now there are few options for developers wanting to experiment with Risc-V on a full operating system, and the best ones are all prohibitively expensive for their capability. Due out next year, however, is the PicoRio board from Rios labs, which is described as close in form factor and capabilities to most Arm boards based around the RPI format. Provided this board is not vaporware and actually does materialize at a reasonable cost, I intend for HitchHiker Linux to be running on it as soon as feasible. I intend to make HitchHiker one of the very first options independent of distros such as Debian and Fedora, and to make Risc-V a first class citizen. It is my hope that in doing so HitchHiker might help to push along the adoption of truly open hardware on which to run open software.


@ -0,0 +1,43 @@
+++
title = "Patching pkgsrc, building rust and Arm headaches"
date = 2020-07-20
[taxonomies]
tags = ["Arm","Rust","Systemd","Libraries","Pkgsrc"]
+++
Having been working with pkgsrc for a couple weeks now there's a bit of a flow
developing, and quite a bit of progress, with a big caveat. The issues I'm
running into are that there are quite a few more build failures for Arm than
there are for x86_64.<!-- more -->
Interestingly, both libdrm and MesaLib fail with similar messages about assembly
code that isn't supported on the current architecture (armv7l). In situations
like this it's hard to blame pkgsrc, as the fault is upstream from there. I'm
getting quite used to adding packages to my local tree and tweaking BSD style
Makefiles. I'm making every attempt to send fixes upstream; however, waiting for
a response each and every time is just not really going to be an option here, so
in the meantime I've decided to import the entirety of pkgsrc into git and use
that tree as the "official" branch for HitchHiker. This will enable our project
to build up a substantial package collection even if upstream pkgsrc begins to
lag behind.
One of the problematic packages that is now often being pulled in as a dependency is rustc, the rust compiler. It's problematic on two fronts. First, unless your system has substantial amounts of memory a parallel build will fail as the memory fills up (mine has 8 gig and still fails). Second, and quite frustrating, is that rustc is only compatible with certain versions of either openssl or libressl, and our version of libressl was too new. After seeing if we could use either libressl or openssl from pkgsrc and encountering repeated failures, I just rolled back libressl to the newest version that rust supports.
Let me pause here for a brief commentary on rust as a programming language. There is a lot to like about rust, and it seems that the language may be with us for quite some time, as it seems to see more usage every day and has achieved a critical mass of developers. I've played around with it myself a little bit and the documentation is very good, it's probably one of the easier to learn languages, and the ecosystem as a whole is very nice when you look at all of the features that cargo brings to the table. However, I am not a fan of the fact that the rust standard library, libstd, is compiled in statically by default. This results in a simple "Hello, world!" executable compiling into a 7.1m binary on my machine before stripping, which comes down to 220k after stripping. When compiling the same program dynamically it comes all the way down to 16k after stripping, so we have approximately a 204k overhead *per binary* for the static libstd.
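The size comparison above can be reproduced with stock rustc; the file name is arbitrary, and exact numbers will vary by version and platform.
```bash
# static libstd (the default)
rustc -O hello.rs -o hello-static && strip hello-static

# dynamically linked against libstd
rustc -O -C prefer-dynamic hello.rs -o hello-dyn && strip hello-dyn

ls -l hello-static hello-dyn   # compare on-disk sizes
```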
I am also not happy about rolling back libressl, as I would much rather have the newest possible version of what is a security critical piece of software. However, I believe it is still a better and more secure option than openssl, which most distros are still using. It's a shame that a major piece of software like rust, which is now a dependency of quite a number of other packages, has not been updated to use the newer ABI presented by newer versions of libressl.
After a couple weeks of building packages on both x86_64 and armv7l, using an identical set of packages in the base system, it is very apparent just how much more development x86_64 gets than arm. Two packages in particular illustrate this quite well - libdrm and MesaLib - both of which are major dependencies of modular Xorg. In both cases assembly code is generated by gcc that contains instructions not supported by the processor, and the only fix is to disable features in the Arm build. In the case of MesaLib this involves patching the source code. Thanks to the Arch Linux Arm team for showing the way, as my patches are adapted from their PKGBUILD.
There have been just a few more tweaks to the base system as I attempt to fully
finalize the build in preparation for release. The rpcgen utility that I had
incorporated proved to not fully support all features needed for building certain
packages in pkgsrc. After some searching I found a newer version that had been
autotooled in the rpcsvc-proto package, which was originally a compatibility
package during the time that SunRPC was being removed from Glibc. I've imported
the source for just the rpcgen utility from that package into HitchHiker's build
tree, removed all of the autotools material, moved all C-preprocessor flags into
a config.h file, disabled NLS and made a very lean, mean Makefile for it. While
I was at it I used similar techniques to update the builds for mksh and pax.
Both of those packages now include their CPPFLAGS as defines in config.h, and I
have even begun eliminating ifdefs from the C files to make for a cleaner and
faster build.

View File

@ -0,0 +1,35 @@
+++
title = "Porting NetBSD userland, infrastructure improvements"
date = 2020-08-17
[taxonomies]
tags = ["NonGNU","Porting"]
+++
As noted previously, I have some issues with the complexity of GNU autotools and the bloat that it introduces into building what should be very simple programs. I pointed out coreutils as one of the worst offenders at that time, and have been exploring ways to either port coreutils to a simpler build system or outright replace the package.<!-- more -->
To be fair, there is nothing particularly wrong with the utilities themselves, and functionally they work as expected. One benefit that coreutils provides over almost every other implementation is improved localization. However, there is complexity that is at times not really justified. Let's take the utility program "true" as an example, whose sole purpose is to do nothing and exit with success. A simple C program can be constructed to perform this function perfectly well with the following code:
```C
#include <stdio.h>
int main() { return 0; }
```
This simple code compiles to a 16k executable (stripped) and does everything that we need it to do. But we can do even better actually, with a single line shell script:
```Bash
#!/bin/sh
exit 0
```
This little gem occupies a whopping 4k on disk on my machine and also does exactly what we expect the program to do. So why exactly is coreutils' true.c 80 lines of code that compile to a 40k executable after stripping?
I had at one point tried replacing coreutils and util-linux completely with the sbase and ubase packages from [suckless.org](https://suckless.org/). The main issue with doing this is that not all of the utilities are feature complete yet. They are fine for day to day use navigating around in a shell environment, but begin to break down when running portable scripts or building software, where an unsupported flag causes the utility call to fail with an error. I do have a branch containing the entirety of sbase and ubase, with the source reconfigured to match our build tree layout of one program per directory and building using hhl.cprog.mk.
I have always had a lot of admiration for BSD systems and ran FreeBSD as my main OS on several machines for a number of years. Too much is actually made of the differences between a BSD userland and a GNU userland, as the vast majority of the time they are functionally equivalent. As mentioned previously, the GNU utilities have better localization. They also accept GNU long options. I have never found the lack of long options to be an issue; on the contrary, short options are faster to type and are generally the go-to choice for those familiar with the shell interface. Therefore I decided to start working on porting BSD userland to Linux with Glibc, using our HitchHiker build system. As there are a great many small utilities this is a process that isn't going to happen overnight, but it has already yielded a surprising amount of success with modest effort.
There are a few projects already in place that do something similar, such as [lobase](https://github.com/Duncaen/lobase), which approach the issue of missing functions and macros by creating a compatibility library and then linking the utilities against it. While this is a perfectly valid approach, it has the drawback of increasing code size somewhat. I have also found so far that simply removing references to macros that don't exist on a Glibc system is enough to get the code to compile with a simple ```gcc -Wall file.c```. At other times one can simply translate from one function to another, for instance from strlcpy to strncpy, the latter of which is available in Glibc and does very nearly the same thing.
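As a rough illustration of that kind of translation (this is a generic example, not a patch from the actual tree, and the function and buffer names are hypothetical), the pattern looks something like the following; note that strncpy, unlike strlcpy, does not guarantee NUL-termination, so it has to be added explicitly:
```C
/* Generic illustration of swapping a BSD strlcpy() call for strncpy().
 * The names used here are hypothetical. */
#include <stdio.h>
#include <string.h>

static void copy_name(char *dst, size_t dstsize, const char *src) {
	/* was: strlcpy(dst, src, dstsize); */
	strncpy(dst, src, dstsize - 1);
	dst[dstsize - 1] = '\0'; /* strncpy may leave dst unterminated */
}

int main(void) {
	char buf[8];
	copy_name(buf, sizeof(buf), "hitchhiker");
	puts(buf); /* prints "hitchhi": truncated, but safely terminated */
	return 0;
}
```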
Without getting too far into the details, so far I have working copies of the utilities apply, banner, basename, cat, dirname, grep and tr taken from NetBSD. Grep was previously a separate package, which has now been removed. We already had the awesome pax utility imported from MirOS, and have now removed GNU tar in favor of a symlink to pax, which functions very well as a tar replacement. Additionally, the utilities true, false, and which have been replaced with one-line shell scripts (which is a zsh builtin, so we call it from another shell via a one-line zsh script).
The result so far is a mixed userland that is predominantly composed of GNU utilities with a sprinkling of BSD licensed utilities thrown in. The endgame is to replace the bulk of the GNU utilities with ports of the BSD versions, including many utilities such as apply, banner and pax that do not traditionally even exist on a Linux system. I may supplement this over time with utilities taken from sbase, ubase, or lobase as difficulties are encountered in porting the NetBSD utils, but I would like to do as much straight porting against the actual system libraries as possible, as opposed to linking against compatibility libraries.
On to the build tree infrastructure improvements mentioned in the post title. A few posts back I mentioned standardizing the way that we handle packages that need extra steps beyond "make install". Similarly, I'm working on simplifying the use of build systems other than autotools. One feature of autotools that we leverage by default is that we can build outside of the source tree in a separate object directory. This is the default, but not all packages support it. There are even a few packages out there that mostly mimic autotools from an end user perspective, having a home-brewed configure script, but generally do not support building in an object directory. So now, by simply setting the variable ${no_objdir}, we can handle those cases in a unified way, and our build system knows, for instance, that ${objdir}/.dirstamp is not a dependency of the configuration stage.
Additionally, some packages eschew any kind of configuration step entirely and build just by running make. We now handle this in a more unified manner as well, rather than on a per-package basis, by setting ```${use_configure}``` to false.
These changes are minor, but are going to go a long way towards implementing a ports tree with a more concise and understandable codebase.

View File

@ -0,0 +1,39 @@
+++
title = "Reconfiguring the build process"
date = 2021-02-01
[taxonomies]
tags = ["cross-compile","sysroot"]
+++
Up until now HitchHiker has used a build process that closely followed that used
by the Linux From Scratch book. There was a temporary toolchain, hacked to use a
different library directory and prefix
<!-- more -->
so as not to conflict with the host system or with the libraries and tools being installed as the finished system. This temporary toolchain was in itself a complete development environment that included everything required to bootstrap a full system. To build the final system, it was necessary to chroot into a new directory containing the temporary tools and root filesystem.
This method had some advantages, but also involved compiling most packages at least twice. It also required quite a lot of extra work to do any kind of cross compiling. In fact, I had set up an RPI4 specifically as a compilation machine for Armv7l, as it was basically impractical to use my x86_64 laptop to develop for Arm.
In an effort to increase the flexibility of the HitchHiker build tree, support more architectures, and streamline the build, I am experimenting with using a sysroot compiler for the majority of the build. With minimal effort I was able to set up cross toolchains for armv7l, aarch64, riscv64, and i486, as well as a native x86_64 toolchain, using the same build tree and only adjusting the ${arch} variable in config.mk. In testing I was able to compile almost the complete userland for x86_64, aarch64 and riscv64 without resorting to a chroot, and all on my main machine. My current plan is to compile most of the base system using a sysroot toolchain, either native or cross, including the build of the native toolchain, and then to chroot into the nearly complete root filesystem to finish up any packages that cannot be easily cross compiled. At this juncture I believe the only package in the base system that will be truly problematic for cross-compilation is Perl, which uses an ill-conceived homegrown configure script instead of autotools. However, this is not quite verified for all packages just yet.
In the case of a package refusing to cross-compile, I can then use the method outlined in the previous post and compile with the native toolchain using qemu user-mode emulation.
In addition to the vast reduction in packages being compiled twice or more, this also has the advantage of letting the build run on a fast machine. Additionally, with most of the work not requiring the chroot trick, there is no emulation layer to slow the process down. A future goal will be to fix the issues with Perl and push patches upstream if possible, as Perl is a hard build dependency of the kernel and therefore must be considered an essential part of the base system.
To give some idea of the resource savings this brings about, here's a quick table outlining the number of passes required for certain packages under the old layout vs the new. Note that it is still necessary to make multiple passes at Binutils and Gcc, as we have to build our sysroot/cross toolchain, but there is a rather significant reduction in the amount of other software being built twice.
Package | Passes (old) | Passes (new)
--- | --- | ---
Binutils | 3 | 2
Gcc | 3 + separate libstdc++ pass | 3
Linux Headers | 2 | 1
Glibc | 2 | 1
M4 | 2 | 1
Ncurses | 2 | 1
Bison | 2 | 1
Diffutils | 2 | 1
Gettext | 2 | 1
Make | 2 | 1
Perl | 2 | 1
Python | 2 | 1
Texinfo | 2 | 1
Patch | 2 | 1
Busybox | 1 | Not Built

View File

@ -0,0 +1,33 @@
+++
title = "Restarting make, buildworld cleanup, default software"
date = 2020-08-03
[taxonomies]
tags = ["GNU Make","Milestones","Roadmap"]
+++
One of the deficiencies in the HitchHiker build tree up to this point has been dealing with packages that require some manual intervention after the "make install" step. Originally there was no unified infrastructure to deal with this scenario. Very quickly<!-- more --> during testing I realized that this was an omission that was going to cause problems, as the scenario was much more common than I at first realized, and implementing solutions on a per-package basis is both quite error prone and likely to introduce bugs into the build process. Up until now the workarounds have involved redefining the .DEFAULT_GOAL variable and creating custom targets to finish the installation, while making the basic installation a dependency of said targets.
A common problem that arose out of this hodgepodge approach is that if the new targets were ".PHONY" targets (targets that are not actual files), or the targets were symlinks, then a stopped build would error out on resuming, either because the timestamps of dependencies were the same as or newer than their targets, or because a target could not actually be run since it involved moving some file that we had already moved the first time it ran. This made testing new features a laborious process, as often any errors required removing the entire rootfs and restarting buildworld from the beginning.
When I implemented hhl.cprog.mk, I took steps to remedy this situation. If one defines the variable "finish" then the system will know that there are steps to be taken after "make install". Any additional steps can then be added to the new target ${objdir}/.finished, which is the second dependency of the "finish" target after "install". This worked well in practice, so I have implemented the same functionality in targets.mk, which is used by all "external" source packages (those packages for which the source is not included in the HitchHiker build tree but downloaded as tarballs during the build).
As a result of this, after considerable tweaking, the build can now be restarted from any point after stopping and will run to completion. I consider this to be an important milestone, as it makes future development much less arduous.
An important goal of HitchHiker is that the base system includes everything needed to bootstrap a full system. As such, it is unfortunately required to include certain packages that will not necessarily be required under all circumstances. A good example of this is the networking stack, as it is next to impossible to bootstrap a full system without a network connection, but we don't know if that connection is going to be wired, wireless, secured wireless, static or dynamic, etc. The first test builds of the complete system ignored this completely and only included what is needed to manually set up a static, wired connection.
This situation is also now remedied, as the officially distributed builds will include the dhcpcd, wireless-tools and wpa_supplicant packages. However, it is entirely possible to exclude any or all of these packages by setting appropriate variables in the file config.mk in the top level of the source tree.
There are now a number of things that can be tweaked when building the system from source. One can specify a list of locales to compile for glibc rather than the default of compiling all supported locales. By setting the variable rpi to true, you get the Raspberry Pi Foundation kernel instead of mainline Linux. And a custom kernel configuration file can be substituted for the default HitchHiker one. None of these tweaks should change the essence of the system being HitchHiker; they just allow the user some extra choice and the ability to slim the system down a fair bit, as well as speeding up the build somewhat.
Lastly, I want to talk about the roadmap for HitchHiker a little bit. Regarding the build tree itself, at the moment we have the .mk file targets.mk, which includes most of the logic to build all of the packages that are downloaded and compiled as part of the base system. This file's internal logic defaults to using GNU autotools, but is flexible enough to build packages in other ways by resetting appropriate variables. This approach does work, but requires some deep knowledge of how it all fits together. As I would eventually love to see my work find a larger audience, I would like to make the infrastructure a little bit easier to understand, so that, for example, when we graduate from pkgsrc to our own ports tree there can be some community involvement in fleshing it out.
To that end, a future goal is going to be moving the autotools logic into an "autotools.mk" file and creating, for instance, "cmake.mk" or "waf.mk", which will know how to build packages using those build systems. At that point one could set the build system to "waf" via a variable assignment and then include hhl.ports.mk, which would then pull in "waf.mk". Doing this cleanly is an important step on the way to creating a ports tree that will be easy to debug, maintain and update.
This theoretical "hhl.ports.mk" will also need logic for tracking everything to be installed by each and every package, uninstalling packages and upgrading to newer versions. In short it will need much of the logic of a package manager. I have actually implemented most of this previously in the original HitchHiker system of ten years ago. At that point installed files were tracked by creating a timestamp file before "make install" and then using the find utility to catch any files with a newer timestamp. This had the attractive benefit of being completely agnostic to build system, but is not without it's drawbacks. Chief among the drawbacks is that it is a rather time consuming approach, but there is also the danger that we will miss a file if it is created with the wrong time stamps, or may capture an incorrect file that has been created by another running process. For those reasons I'm planning to implement it by installing into a DESTDIR. The problems here are that one needs to be familiar with multiple build systems to know how to achieve this, and also that the concept of a DESTDIR is not universally implemented in all pieces of software out there. For the former, the solution is good old fashioned grunt work to learn the ins and outs of each build system as it is encountered. For the latter, the approach is going to be patching the build system and submitting patches upstream, with the hope that the authors will be receptive.
I'm going to interject at this point to point out that pkgsrc uses an entirely different approach by requiring every package directory to include a PLIST file, thus placing the burden squarely on the shoulders of the maintainer to populate this file with every file the package will install on your system. FreeBSD also uses this approach. Functionality could be included into "hhl.ports.mk" to use a PLIST file if it exists, skipping any automatic file discovery.
Then there is another elephant in the room - dependency tracking. The previous incarnation of HitchHiker kept a flat file database of every file that every package in the ports tree might install, ran "file" on every file in the new package to determine its file type, then ran ldd on every elf file and compared the results to the database. It was quite effective at catching any binary dependencies. It was also slower than shit on large packages and extremely overkill much of the time. Knowing a bit better now, I have no intention of going down that rabbit hole again. Instead, dependencies are just going to be manually specified by the developer in each port's Makefile. While this is somewhat error prone, it can be refined over time to be near perfect and has the advantage of speed. In addition, not all dependencies are library dependencies. Some dependencies are of the nature of expecting another program to be available to exec() at runtime, and some dependencies are simply build time requirements.
Now here's the point where I'm going to get controversial. BSD ports implement all of this using just make, and for years had nothing like the traditional Linux package manager, instead relying on simple command line utilities to manipulate packages. I -don't- want a package manager in HitchHiker. I don't want to reinvent the wheel and write a new one, and none of the existing ones are going to be suitable. I think they're generally just complicated, poorly understood by the average user, and since there are already so many implementations it's virtually impossible at this time for one of them to become "standard", thus basically making the idea of the Linux desktop an unachievable dream. No company is going to bother supporting dpkg, rpm, Slackware tarballs, Arch tarballs, Flatpak and Ubuntu snaps. It's not happening. At most they're going to release an rpm, a deb or a snap and just ignore everybody else. The ideas of the commercial world embracing desktop Linux, the year of desktop Linux, etc. are all basically pipe dreams.
Here's what I picture instead. If we can manipulate source archives into binary packages and then install those packages all using make, we can frankly do the same with binary packages. BSD ports trees and pkgsrc proved that this would work years ago. In fact, in FreeBSD you can install binary packages that are dependencies of the port that you are building rather than building them from source, or build the ports that don't have a binary package available while installing binary packages for the rest. This system functioned just fine for a long time. Their mistake a few years ago was jumping on the Linux bandwagon and writing yet another package manager, pkg. Since that time I've noticed quite a lot more issues keeping FreeBSD packages up to date, to the point where I even abandoned an installation at one point. So we're skipping it entirely, and instead I'm going to provide a "binary-install" target in "hhl.ports.mk" that will fetch and install binary packages. Easy peasy, no package manager required, unless you consider the ports tree itself to be a package manager.

View File

@ -0,0 +1,50 @@
+++
title = "Some More Home Cooking"
date = 2020-10-06
[taxonomies]
tags = ["C programming","Utilities","NonGNU"]
+++
After removing GNU coreutils and taking closer stock of what we have as a
"base system", I am mostly happy with the selection but now wish to spend some
time making behaviors, documentation and source code itself more uniform. I have
also done some work towards correcting a couple of omissions that are present
due to the previous surgery. The commit that I pushed this morning has a few
notable changes.
<!-- more -->
* hostid - This is a non-POSIX utility of GNU origin that is present on systems with GNU coreutils but not on BSD based systems. I have written a replacement from scratch in C, which was a fairly trivial task as mentioned previously. The new utility, much like the rest of our userland, behaves like the GNU counterpart with the exception of not accepting long options and not having a help option (help is available via the man-pages, so '--help' is always redundant IMO).
* nproc - Another scratch implementation of a utility with a GNU origin (a minimal sketch of this approach follows this list).
* rev - A scratch rewrite for HitchHiker, should be functionally identical to the BSD and/or Suckless versions. Notably, my version, the BSD version (which we had previously) and the suckless version are all quite a bit faster than the rev utility provided by the util-linux package due to architectural differences which I'll explain below.
* mkdir - Another scratch rewrite. These are being done partially as an exercise in C programming and partially to lessen our dependence on the portability libraries used to port BSD utilities to Linux. However, this utility is currently a special case in that regard (see below).
* base64 - This one had been ported from NetBSD but was actually broken on every platform with regard to processing its data from a file rather than stdin. It's now fixed, at least in HitchHiker.
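To give a concrete idea of what a "scratch implementation" means here, the following is a minimal sketch of an nproc-style program; this is an illustration using glibc's sysconf(), not the actual HitchHiker source:
```C
/* Minimal nproc-style sketch: report the number of online processors.
 * Illustration only; not the HitchHiker implementation. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
	/* _SC_NPROCESSORS_ONLN is a glibc extension available on Linux */
	long n = sysconf(_SC_NPROCESSORS_ONLN);
	if (n < 1) {
		fprintf(stderr, "nproc: cannot determine number of processors\n");
		return 1;
	}
	printf("%ld\n", n);
	return 0;
}
```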
I have also begun a process of formatting the source code imported from other sources using ```clang-format``` to enforce more uniform standards, and to go through the man-pages a few at a time to do some similar housekeeping. It's a small niggle, but both BSD and Suckless use tabs for indentation where I prefer two spaces for compactness. There are also some inconsistencies in brace styles and function declarations. I prefer the function declaration to be on one line followed by the opening brace on the same line, and for things like loops to include the opening brace on the same line as the loop initialization. Basically, the defaults provided by clang-format make for nice, readable and consistent code.
Now, as mentioned above, let's get into some detail with the implementations of
**rev** and **mkdir**. Let's begin with rev, a simple utility which just reverses the characters in each line. Now, a simple implementation of rev, and basically where I started, would just read each line using getline (or fgetln on a BSD system) and reverse the bytes, excluding the newline, which is put back at the end of the output. This works fine for ascii characters, which all by definition fit into a single byte of data. However, utf8 is a fact of life, and we quickly run into a situation where we have reversed the byte order of a multibyte character, printing gibberish to the terminal. The Util-Linux utility solves this by reading each line into an array as a sequence of widechars, essentially reading character by character, and then doing the reversal.
The BSD, Suckless, and now HitchHiker utilities all use a similar, and more efficient, approach. The line is read as described above; then, starting with the character preceding the newline, each byte is tested to see whether it is the beginning of a character or the middle of a multibyte character. If we run into a multibyte character, we find the beginning of it, emit it forwards from there to its end, and then continue skipping backwards from before the multibyte character. While sounding convoluted, this is quite a bit faster when processing large amounts of data, as we're only looking at the first few bits of each byte rather than reading each line in character by character.
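To make that description concrete, here is a minimal sketch of the backward scan as I understand it (the real BSD, Suckless and HitchHiker sources differ in the details): step back over utf8 continuation bytes, which always match the bit pattern 10xxxxxx, then emit the whole character in its original byte order.
```C
/* Sketch of a utf8-aware rev: reverse each line character by character,
 * keeping the bytes of a multibyte sequence in their original order.
 * Illustration only; not the actual HitchHiker source. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

static void rev_line(const char *s, size_t len) {
	size_t i = len;
	while (i > 0) {
		size_t end = i;
		/* walk backwards past any continuation bytes (10xxxxxx) */
		do {
			i--;
		} while (i > 0 && (s[i] & 0xc0) == 0x80);
		/* emit the whole character (1 to 4 bytes) forwards */
		fwrite(s + i, 1, end - i, stdout);
	}
	putchar('\n');
}

int main(void) {
	char *line = NULL;
	size_t cap = 0;
	ssize_t n;
	while ((n = getline(&line, &cap, stdin)) != -1) {
		if (n > 0 && line[n - 1] == '\n')
			n--; /* strip the newline; it is replaced after reversal */
		rev_line(line, (size_t)n);
	}
	free(line);
	return 0;
}
```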
Now on to mkdir. Most of this was straightforward to implement, including the -p option, as we just have to construct the path directory by directory by processing the path as a string, using, in our case, the strtok function. However, mkdir is also expected to be able to set the Unix permissions of the directories that it creates, and to accept the mode argument in either octal or symbolic format. It is trivial to implement the octal permissions, but implementing symbolic permissions entails coding a parser which must accept a fairly wide range of possible permutations. After looking at how the functionality has been implemented in the BSD, GNU, and Suckless utilities, it quickly became apparent that the BSD implementation makes the most sense from the standpoint of efficient use of code, as the BSD C-library already contains the getmode and setmode functions. As these have already been ported to Linux via libbsd, which is already present in HitchHiker because it has been used to port much code from both NetBSD and FreeBSD, I decided to just go ahead and use them. On a side note, it would be fantastic to see some of these functions present in the GNU C-library. The getmode function is quite useful, and fgetln is quite a bit more, let's say, graceful to use than getline.
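Here is a rough sketch of the -p logic described above, simplified to a fixed octal mode (the symbolic parsing via getmode/setmode is a separate concern), and again not the actual HitchHiker source:
```C
/* Simplified mkdir -p sketch: split the path on '/' with strtok and
 * create each prefix in turn, ignoring components that already exist.
 * Illustration only; a real mkdir also parses octal/symbolic modes. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

static int mkdir_p(const char *path, mode_t mode) {
	char buf[4096], built[4096] = "";
	if (strlen(path) + 2 > sizeof(buf))
		return -1;
	strcpy(buf, path);
	if (buf[0] == '/')
		strcat(built, "/");
	for (char *tok = strtok(buf, "/"); tok != NULL; tok = strtok(NULL, "/")) {
		strcat(built, tok);
		if (mkdir(built, mode) == -1 && errno != EEXIST) {
			perror(built);
			return -1;
		}
		strcat(built, "/");
	}
	return 0;
}

int main(int argc, char *argv[]) {
	if (argc < 2) {
		fprintf(stderr, "usage: %s directory\n", argv[0]);
		return 1;
	}
	return mkdir_p(argv[1], 0755) == 0 ? 0 : 1;
}
```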
It should be noted that statically linking libbsd into a Beerware-licensed utility is somewhat problematic. I have been considering extending the build system somewhat, to also build shared libraries alongside the static archives. I am somewhat hesitant, as I feel that low level base utilities such as this should only depend on the system C library at runtime. Alternatively, if at some point in the future HitchHiker does make the switch to Musl libc (which is still on the table) then it might be possible to patch musl to incorporate the functions right into the system C library for HitchHiker, allowing for their removal from libbsd and lessening our reliance on it for porting utilities. I rather like this train of thought...
The end goal is, of course, the kind of tightly integrated userland that BSD
systems are noted for. When looking at the various base utility implementations,
it is striking that commonly there is a library of common functions built and
linked into the utilities. This tactic is employed by sbase, ubase, GNU coreutils,
Util-Linux, and lobase (the port of OpenBSD userland to Linux, from which I have
borrowed heavily). I'm employing it myself by using libbsd to port NetBSD and
FreeBSD utilities. In the long run, I want to eliminate this trend and fully
integrate everything to where we're only depending on the C library both at build
and run times. My own implementations, with the exception of mkdir, currently do
this. As an example, on BSD systems a program is aware of the name under which it
was invoked via the getprogname and setprogname functions. As GNU libc does not
have these functions, we keep a global __progname variable, which is set to
the program name by calling ```basename(argv[0])``` in the main function. While
this results in some additional boilerplate code, it's actually scarcely more
than what BSD already has due to the need to call getprogname and/or setprogname.
Similarly, Suckless abstracts away certain things like printing to stderr (their
eprintf function) and getting program arguments, which are easily done just using
```fprintf(stderr, "msg")``` and getopt, respectively.
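As a sketch of that boilerplate (the variable name and error message here are purely illustrative, not lifted from the actual utilities):
```C
/* Illustrative boilerplate: emulate BSD's getprogname()/setprogname()
 * on glibc by setting a file-scope pointer from basename(argv[0]),
 * and report errors with plain fprintf(stderr, ...) rather than a
 * helper library. Names are hypothetical. */
#include <libgen.h>
#include <stdio.h>

static const char *progname;

int main(int argc, char *argv[]) {
	(void)argc;
	progname = basename(argv[0]);
	fprintf(stderr, "%s: nothing to do\n", progname);
	return 1;
}
```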
A possible next step would be refactoring much of this code to remove the dependencies on the respective utility libraries, replacing them with more "generic" programming. While this might increase source code size somewhat, it would not likely impact compiled size or efficiency, as we're just removing the abstractions and putting them back into the programs. This has the benefit of making the code easier to understand without having to look at a library and what it does.

View File

@ -0,0 +1,100 @@
+++
title = "Website Redesign"
date = 2021-02-18
[taxonomies]
tags = ["Site News","Web Programming","css","html","Zola","mdbook"]
categories = ["Site News"]
+++
I know a little bit about a lot of things, and one of the things that I know
only in passing is web development.<!-- more --> I am by no means qualified at
or excited about web design. However, in the spirit of DIY I have always muddled
through all of my own projects. This has in the past meant using a CMS of some
sort, and a rather long time ago I settled on Mezzanine, a Django-based CMS
written in Python.
This worked out all right in most ways. However, a few weeks back I had a server
outage that I traced back to the update from Python 3.8 to Python 3.9. The
virtualenv that I had set up simply stopped working, causing the web server to be
unable to start as the default handler was DOA. Now, a Python virtualenv is a
supposedly complete standalone environment containing the Python interpreter and
libraries frozen to be version compatible with your application. It is supposed
to be self contained and not dependent upon the system-wide Python installation.
The fact that bumping a minor version number caused it to fail is distressing
and has soured my already low opinion of Python somewhat.
Additionally, I have long desired to incorporate a full handbook into the website
and do so in a way that looks seamless with the rest of the site. HitchHiker is
not going to be a turnkey system for most people and will need good documentation.
I have tried a few things that were available online, including Mezzanine Wiki,
but nothing is really suitable and many packages are broken and/or unmaintained
(including Mezzanine Wiki at the time I tried it).
Being impressed with the quality of the documentation for Rust I looked into what
they use. The Rust Book is built using [mdbook](https://github.com/rust-lang/mdBook),
which is a wonderful static markdown to html generator that does exactly what it
claims to do and does it exceptionally well. After messing with it for all of two
minutes I was sold - it's just plain phenomenal in use.
Having settled on mdbook for the forthcoming "**Hitch Hiker's Guide to Hitch Hiker
Linux**" I decided that if we're going static pages there, it would be a good
time to go static pages everywhere. I set about looking for a static site generator
that would give me enough flexibility for my needs and still be simple to use.
While there is no shortage of SSGs available right now, there were several popular
ones that I never really considered based on implementation. Jekyll, for one, is
written in Ruby and I'm not looking to pollute my machine with more interpreted
language cruft.
I gave [Hugo](https://gohugo.io/) a long, hard look and went so far as to install
it and do some testing. Hugo is written in Go, seems to work well, and is definitely
fast. However, none of the available themes were going to be suitable without a
good deal of hacking.
At first glance, [Zola](https://www.getzola.org/) appeared to have the same
strengths and the same issues as Hugo for my use case. I really couldn't see
applying any of the themes that I found directly without a lot of customization,
and there were a couple that I tried that were flat out broken. On top of whatever
customization I was going to have to do to whatever theme I picked, I was also
going to have to adapt to an mdbook theme as well... or was there another way?
It turns out that the default themes that ship with mdbook are already great,
and would look just as good adapted to an entire site. Now, mdbook itself is
not really suitable for generating the entire site, as I need a front
page, I want to keep my blog, and I'd like to be able to incorporate other pages
as well that aren't going into the table of contents for the book. But what I
thought I could do is to create my own Zola template using the css from the mdbook
themes.
So in the end that's what I did. Zola, written in Rust, is actually quite a bit
simpler than Hugo, and after a few initial trip-ups getting started I'm finding
the template system to be something I can navigate all right. I'm left with a two
step rendering process to build the full site, as I'm rendering most of the site
using Zola and the book using mdbook. But hey, I've already proven that I can do
a good bit of automation using ```make```, right?
While there's a bit more work left to finish migrating the old site, I have to
say that I'm completely satisfied with my choices as of now. Both Zola and mdbook
do things for me that were not going to be easily achievable in Mezzanine and come
with a lot of benefits right out of the box. For instance, built-in syntax
highlighting for code snippets! Here's ```echo``` in Rust:
```Rust
#![warn(clippy::all, clippy::pedantic)]
use std::env;
fn main() {
    let args: Vec<String> = env::args().collect();
    let len = args.len();
    let n = len > 1 && args[1] == "-n";
    let i = if n { 2 } else { 1 };
    for (index, arg) in args.iter().enumerate().skip(i) {
        if index < len - 1 {
            print!("{} ", arg);
        } else {
            print!("{}", arg);
        }
    }
    if !n {
        println!();
    }
}
```

418
public/404.html Normal file
View File

@ -0,0 +1,418 @@
<!DOCTYPE html>
<html lang="en" class="sidebar-visible no-js light">
<head>
<meta charset="utf-8">
<title>Hitch Hiker Linux</title>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<meta name="description" content="">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="theme-color" content="#ffffff" />
<link rel="icon" href="&#x2F;favicon.svg">
<link rel="shortcut icon" href="&#x2F;favicon.png">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;css&#x2F;variables.css">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;css&#x2F;general.css">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;css&#x2F;chrome.css">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;css&#x2F;print.css" media="print">
<!-- Fonts -->
<link rel="stylesheet" href="&#x2F;handbook&#x2F;FontAwesome&#x2F;css&#x2F;font-awesome.css">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;fonts&#x2F;fonts.css">
<!-- Highlight.js Stylesheets -->
<link rel="stylesheet" href="&#x2F;handbook&#x2F;highlight.css">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;tomorrow-night.css">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;ayu-highlight.css">
<!-- Custom Stylesheets -->
<link rel="stylesheet" href="&#x2F;hhl.css">
</head>
<body>
<!-- Provide site root to javascript -->
<script type="text/javascript">
var path_to_root = "&#x2F;handbook";
var default_theme = window.matchMedia("(prefers-color-scheme: dark)").matches ? "navy" : "light";
</script>
<!-- Work around some values being stored in localStorage wrapped in quotes -->
<script type="text/javascript">
try {
var theme = localStorage.getItem('mdbook-theme');
var sidebar = localStorage.getItem('mdbook-sidebar');
if (theme.startsWith('"') && theme.endsWith('"')) {
localStorage.setItem('mdbook-theme', theme.slice(1, theme.length - 1));
}
if (sidebar.startsWith('"') && sidebar.endsWith('"')) {
localStorage.setItem('mdbook-sidebar', sidebar.slice(1, sidebar.length - 1));
}
} catch (e) { }
</script>
<!-- Set the theme before any content is loaded, prevents flash -->
<script type="text/javascript">
var theme;
try { theme = localStorage.getItem('mdbook-theme'); } catch(e) { }
if (theme === null || theme === undefined) { theme = default_theme; }
var html = document.querySelector('html');
html.classList.remove('no-js')
html.classList.remove('light')
html.classList.add(theme);
html.classList.add('js');
</script>
<!-- Hide / unhide sidebar before it is displayed -->
<script type="text/javascript">
var html = document.querySelector('html');
var sidebar = 'hidden';
if (document.body.clientWidth >= 1080) {
try { sidebar = localStorage.getItem('mdbook-sidebar'); } catch(e) { }
sidebar = sidebar || 'visible';
}
html.classList.remove('sidebar-visible');
html.classList.add("sidebar-" + sidebar);
</script>
<nav id="sidebar" class="sidebar" aria-label="Table of contents">
<div class="sidebar-scrollbox">
<ol class="chapter">
<li class="chapter-item affix ">
<a href="/">Home</a>
</li>
<li class="chapter-item affix ">
<a href="/news/">News</a>
</li>
<li class="chapter-item affix ">
<a href="/about/">About</a>
</li>
<li class="chapter-item affix ">
<a href="/pub/">Download</a>
</li>
<li class="chapter-item affix ">
<a href="https://git.hitchhiker-linux.org">Source</a>
</li>
<li class="spacer"></li>
<li class="chapter-item affix ">
<a href="/handbook/">Handbook</a>
</li>
</ol>
</div>
<div id="sidebar-resize-handle" class="sidebar-resize-handle"></div>
</nav>
<div id="page-wrapper" class="page-wrapper">
<div class="page">
<div id="menu-bar-hover-placeholder"></div>
<div id="menu-bar" class="menu-bar sticky bordered">
<div class="left-buttons">
<button id="sidebar-toggle" class="icon-button" type="button" title="Toggle Table of Contents" aria-label="Toggle Table of Contents" aria-controls="sidebar">
<i class="fa fa-bars"></i>
</button>
<button id="theme-toggle" class="icon-button" type="button" title="Change theme" aria-label="Change theme" aria-haspopup="true" aria-expanded="false" aria-controls="theme-list">
<i class="fa fa-paint-brush"></i>
</button>
<ul id="theme-list" class="theme-popup" aria-label="Themes" role="menu">
<li role="none"><button role="menuitem" class="theme" id="light">Light (default)</button></li>
<li role="none"><button role="menuitem" class="theme" id="rust">Rust</button></li>
<li role="none"><button role="menuitem" class="theme" id="coal">Coal</button></li>
<li role="none"><button role="menuitem" class="theme" id="navy">Navy</button></li>
<li role="none"><button role="menuitem" class="theme" id="ayu">Ayu</button></li>
</ul>
</div>
<h1 class="menu-title">
Hitch Hiker Linux
</h1>
</div>
<!-- Apply ARIA attributes after the sidebar and the sidebar toggle button are added to the DOM -->
<script type="text/javascript">
document.getElementById('sidebar-toggle').setAttribute('aria-expanded', sidebar === 'visible');
document.getElementById('sidebar').setAttribute('aria-hidden', sidebar !== 'visible');
Array.from(document.querySelectorAll('#sidebar a')).forEach(function(link) {
link.setAttribute('tabIndex', sidebar === 'visible' ? 0 : -1);
});
</script>
<div id="content" class="content">
<main>
<h1>Page not found!</h1>
The requested URL was not found on this server. If you entered the URL manually
please check your spelling and try again.
<h2>Error 404</h2>
</main>
</div>
</div>
</div>
<script>
(function themes() {
var html = document.querySelector('html');
var themeToggleButton = document.getElementById('theme-toggle');
var themePopup = document.getElementById('theme-list');
var themeColorMetaTag = document.querySelector('meta[name="theme-color"]');
var stylesheets = {
ayuHighlight: document.querySelector("[href$='ayu-highlight.css']"),
tomorrowNight: document.querySelector("[href$='tomorrow-night.css']"),
highlight: document.querySelector("[href$='highlight.css']"),
};
function showThemes() {
themePopup.style.display = 'block';
themeToggleButton.setAttribute('aria-expanded', true);
themePopup.querySelector("button#" + get_theme()).focus();
}
function hideThemes() {
themePopup.style.display = 'none';
themeToggleButton.setAttribute('aria-expanded', false);
themeToggleButton.focus();
}
function get_theme() {
var theme;
try { theme = localStorage.getItem('mdbook-theme'); } catch (e) { }
if (theme === null || theme === undefined) {
return default_theme;
} else {
return theme;
}
}
function set_theme(theme, store = true) {
let ace_theme;
if (theme == 'coal' || theme == 'navy') {
stylesheets.ayuHighlight.disabled = true;
stylesheets.tomorrowNight.disabled = false;
stylesheets.highlight.disabled = true;
ace_theme = "ace/theme/tomorrow_night";
} else if (theme == 'ayu') {
stylesheets.ayuHighlight.disabled = false;
stylesheets.tomorrowNight.disabled = true;
stylesheets.highlight.disabled = true;
ace_theme = "ace/theme/tomorrow_night";
} else {
stylesheets.ayuHighlight.disabled = true;
stylesheets.tomorrowNight.disabled = true;
stylesheets.highlight.disabled = false;
ace_theme = "ace/theme/dawn";
}
setTimeout(function () {
themeColorMetaTag.content = getComputedStyle(document.body).backgroundColor;
}, 1);
if (window.ace && window.editors) {
window.editors.forEach(function (editor) {
editor.setTheme(ace_theme);
});
}
var previousTheme = get_theme();
if (store) {
try { localStorage.setItem('mdbook-theme', theme); } catch (e) { }
}
html.classList.remove(previousTheme);
html.classList.add(theme);
}
// Set theme
var theme = get_theme();
set_theme(theme, false);
themeToggleButton.addEventListener('click', function () {
if (themePopup.style.display === 'block') {
hideThemes();
} else {
showThemes();
}
});
themePopup.addEventListener('click', function (e) {
var theme = e.target.id || e.target.parentElement.id;
set_theme(theme);
});
themePopup.addEventListener('focusout', function(e) {
// e.relatedTarget is null in Safari and Firefox on macOS (see workaround below)
if (!!e.relatedTarget && !themeToggleButton.contains(e.relatedTarget) && !themePopup.contains(e.relatedTarget)) {
hideThemes();
}
});
// Should not be needed, but it works around an issue on macOS & iOS: https://github.com/rust-lang/mdBook/issues/628
document.addEventListener('click', function(e) {
if (themePopup.style.display === 'block' && !themeToggleButton.contains(e.target) && !themePopup.contains(e.target)) {
hideThemes();
}
});
document.addEventListener('keydown', function (e) {
if (e.altKey || e.ctrlKey || e.metaKey || e.shiftKey) { return; }
if (!themePopup.contains(e.target)) { return; }
switch (e.key) {
case 'Escape':
e.preventDefault();
hideThemes();
break;
case 'ArrowUp':
e.preventDefault();
var li = document.activeElement.parentElement;
if (li && li.previousElementSibling) {
li.previousElementSibling.querySelector('button').focus();
}
break;
case 'ArrowDown':
e.preventDefault();
var li = document.activeElement.parentElement;
if (li && li.nextElementSibling) {
li.nextElementSibling.querySelector('button').focus();
}
break;
case 'Home':
e.preventDefault();
themePopup.querySelector('li:first-child button').focus();
break;
case 'End':
e.preventDefault();
themePopup.querySelector('li:last-child button').focus();
break;
}
});
})();
(function sidebar() {
var html = document.querySelector("html");
var sidebar = document.getElementById("sidebar");
var sidebarLinks = document.querySelectorAll('#sidebar a');
var sidebarToggleButton = document.getElementById("sidebar-toggle");
var sidebarResizeHandle = document.getElementById("sidebar-resize-handle");
var firstContact = null;
function showSidebar() {
html.classList.remove('sidebar-hidden')
html.classList.add('sidebar-visible');
Array.from(sidebarLinks).forEach(function (link) {
link.setAttribute('tabIndex', 0);
});
sidebarToggleButton.setAttribute('aria-expanded', true);
sidebar.setAttribute('aria-hidden', false);
try { localStorage.setItem('mdbook-sidebar', 'visible'); } catch (e) { }
}
var sidebarAnchorToggles = document.querySelectorAll('#sidebar a.toggle');
function toggleSection(ev) {
ev.currentTarget.parentElement.classList.toggle('expanded');
}
Array.from(sidebarAnchorToggles).forEach(function (el) {
el.addEventListener('click', toggleSection);
});
function hideSidebar() {
html.classList.remove('sidebar-visible')
html.classList.add('sidebar-hidden');
Array.from(sidebarLinks).forEach(function (link) {
link.setAttribute('tabIndex', -1);
});
sidebarToggleButton.setAttribute('aria-expanded', false);
sidebar.setAttribute('aria-hidden', true);
try { localStorage.setItem('mdbook-sidebar', 'hidden'); } catch (e) { }
}
// Toggle sidebar
sidebarToggleButton.addEventListener('click', function sidebarToggle() {
if (html.classList.contains("sidebar-hidden")) {
var current_width = parseInt(
document.documentElement.style.getPropertyValue('--sidebar-width'), 10);
if (current_width < 150) {
document.documentElement.style.setProperty('--sidebar-width', '150px');
}
showSidebar();
} else if (html.classList.contains("sidebar-visible")) {
hideSidebar();
} else {
if (getComputedStyle(sidebar)['transform'] === 'none') {
hideSidebar();
} else {
showSidebar();
}
}
});
sidebarResizeHandle.addEventListener('mousedown', initResize, false);
function initResize(e) {
window.addEventListener('mousemove', resize, false);
window.addEventListener('mouseup', stopResize, false);
html.classList.add('sidebar-resizing');
}
function resize(e) {
var pos = (e.clientX - sidebar.offsetLeft);
if (pos < 20) {
hideSidebar();
} else {
if (html.classList.contains("sidebar-hidden")) {
showSidebar();
}
pos = Math.min(pos, window.innerWidth - 100);
document.documentElement.style.setProperty('--sidebar-width', pos + 'px');
}
}
//on mouseup remove windows functions mousemove & mouseup
function stopResize(e) {
html.classList.remove('sidebar-resizing');
window.removeEventListener('mousemove', resize, false);
window.removeEventListener('mouseup', stopResize, false);
}
document.addEventListener('touchstart', function (e) {
firstContact = {
x: e.touches[0].clientX,
time: Date.now()
};
}, { passive: true });
document.addEventListener('touchmove', function (e) {
if (!firstContact)
return;
var curX = e.touches[0].clientX;
var xDiff = curX - firstContact.x,
tDiff = Date.now() - firstContact.time;
if (tDiff < 250 && Math.abs(xDiff) >= 150) {
if (xDiff >= 0 && firstContact.x < Math.min(document.body.clientWidth * 0.25, 300))
showSidebar();
else if (xDiff < 0 && curX < 300)
hideSidebar();
firstContact = null;
}
}, { passive: true });
// Scroll sidebar to current active section
var activeSection = document.getElementById("sidebar").querySelector(".active");
if (activeSection) {
// https://developer.mozilla.org/en-US/docs/Web/API/Element/scrollIntoView
activeSection.scrollIntoView({ block: 'center' });
}
})();
</script>
</body>
</html>

451
public/about/index.html Normal file
View File

@ -0,0 +1,451 @@
<!DOCTYPE html>
<html lang="en" class="sidebar-visible no-js light">
<head>
<meta charset="utf-8">
<title>Hitch Hiker Linux</title>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<meta name="description" content="">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="theme-color" content="#ffffff" />
<link rel="icon" href="&#x2F;favicon.svg">
<link rel="shortcut icon" href="&#x2F;favicon.png">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;css&#x2F;variables.css">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;css&#x2F;general.css">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;css&#x2F;chrome.css">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;css&#x2F;print.css" media="print">
<!-- Fonts -->
<link rel="stylesheet" href="&#x2F;handbook&#x2F;FontAwesome&#x2F;css&#x2F;font-awesome.css">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;fonts&#x2F;fonts.css">
<!-- Highlight.js Stylesheets -->
<link rel="stylesheet" href="&#x2F;handbook&#x2F;highlight.css">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;tomorrow-night.css">
<link rel="stylesheet" href="&#x2F;handbook&#x2F;ayu-highlight.css">
<!-- Custom Stylesheets -->
<link rel="stylesheet" href="&#x2F;hhl.css">
</head>
<body>
<!-- Provide site root to javascript -->
<script type="text/javascript">
var path_to_root = "&#x2F;handbook";
var default_theme = window.matchMedia("(prefers-color-scheme: dark)").matches ? "navy" : "light";
</script>
<!-- Work around some values being stored in localStorage wrapped in quotes -->
<script type="text/javascript">
try {
var theme = localStorage.getItem('mdbook-theme');
var sidebar = localStorage.getItem('mdbook-sidebar');
if (theme.startsWith('"') && theme.endsWith('"')) {
localStorage.setItem('mdbook-theme', theme.slice(1, theme.length - 1));
}
if (sidebar.startsWith('"') && sidebar.endsWith('"')) {
localStorage.setItem('mdbook-sidebar', sidebar.slice(1, sidebar.length - 1));
}
} catch (e) { }
</script>
<!-- Set the theme before any content is loaded, prevents flash -->
<script type="text/javascript">
var theme;
try { theme = localStorage.getItem('mdbook-theme'); } catch(e) { }
if (theme === null || theme === undefined) { theme = default_theme; }
var html = document.querySelector('html');
html.classList.remove('no-js')
html.classList.remove('light')
html.classList.add(theme);
html.classList.add('js');
</script>
<!-- Hide / unhide sidebar before it is displayed -->
<script type="text/javascript">
var html = document.querySelector('html');
var sidebar = 'hidden';
if (document.body.clientWidth >= 1080) {
try { sidebar = localStorage.getItem('mdbook-sidebar'); } catch(e) { }
sidebar = sidebar || 'visible';
}
html.classList.remove('sidebar-visible');
html.classList.add("sidebar-" + sidebar);
</script>
<nav id="sidebar" class="sidebar" aria-label="Table of contents">
<div class="sidebar-scrollbox">
<ol class="chapter">
<li class="chapter-item affix ">
<a href="/">Home</a>
</li>
<li class="chapter-item affix ">
<a href="/news/">News</a>
</li>
<li class="chapter-item affix ">
<a href="/about/">About</a>
</li>
<li class="chapter-item affix ">
<a href="/pub/">Download</a>
</li>
<li class="chapter-item affix ">
<a href="https://git.hitchhiker-linux.org">Source</a>
</li>
<li class="spacer"></li>
<li class="chapter-item affix ">
<a href="/handbook/">Handbook</a>
</li>
</ol>
</div>
<div id="sidebar-resize-handle" class="sidebar-resize-handle"></div>
</nav>
<div id="page-wrapper" class="page-wrapper">
<div class="page">
<div id="menu-bar-hover-placeholder"></div>
<div id="menu-bar" class="menu-bar sticky bordered">
<div class="left-buttons">
<button id="sidebar-toggle" class="icon-button" type="button" title="Toggle Table of Contents" aria-label="Toggle Table of Contents" aria-controls="sidebar">
<i class="fa fa-bars"></i>
</button>
<button id="theme-toggle" class="icon-button" type="button" title="Change theme" aria-label="Change theme" aria-haspopup="true" aria-expanded="false" aria-controls="theme-list">
<i class="fa fa-paint-brush"></i>
</button>
<ul id="theme-list" class="theme-popup" aria-label="Themes" role="menu">
<li role="none"><button role="menuitem" class="theme" id="light">Light (default)</button></li>
<li role="none"><button role="menuitem" class="theme" id="rust">Rust</button></li>
<li role="none"><button role="menuitem" class="theme" id="coal">Coal</button></li>
<li role="none"><button role="menuitem" class="theme" id="navy">Navy</button></li>
<li role="none"><button role="menuitem" class="theme" id="ayu">Ayu</button></li>
</ul>
</div>
<h1 class="menu-title">
Hitch Hiker Linux
</h1>
</div>
<!-- Apply ARIA attributes after the sidebar and the sidebar toggle button are added to the DOM -->
<script type="text/javascript">
document.getElementById('sidebar-toggle').setAttribute('aria-expanded', sidebar === 'visible');
document.getElementById('sidebar').setAttribute('aria-hidden', sidebar !== 'visible');
Array.from(document.querySelectorAll('#sidebar a')).forEach(function(link) {
link.setAttribute('tabIndex', sidebar === 'visible' ? 0 : -1);
});
</script>
<div id="content" class="content">
<main>
<p class="subtitle"><strong>2021-02-15</strong></p>
<pre style="background-color:#2b303b;">
<code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/don&#39;t/panic
</span><span style="color:#bf616a;">man</span><span style="color:#c0c5ce;"> 42
</span></code></pre>
<p>HitchHiker Linux is a very Unix-like distribution of Linux with a focus on
simplicity, elegance and providing a solid base that the end user can turn into
whatever they see fit.</p>
<h2 id="core-principles">Core Principles:</h2>
<ul>
<li>The default installation should include the bare minimum required software to provide a solid base.</li>
<li>Complexity should be discouraged in favor of code elegance, security and maintainability.</li>
<li>End users should not be discouraged from tinkering with their system.</li>
<li>The distribution should make as few assumptions as possible regarding end use.</li>
<li>While newer releases of software often eliminate bugs and vulnerabilities, newer software packages are not by default more secure than stable, mature packages (newer is not always better).</li>
<li>Any changes to the core system functionality, especially those which change expected functionality, must be well justified and well vetted before deployment.</li>
<li>The base installation should include everything required to rebuild itself from source.</li>
<li>The distribution should make as few changes to the upstream software as possible, delivering it as intended by the original author.</li>
<li>Patching of sources should only be done to fix bugs or vulnerabilities.</li>
</ul>
<p>HHL was born of a desire to harness the greater hardware support of Linux while
paying respect to the Unix systems from which Linux was born. The author was a
long time user of FreeBSD who migrated to Arch for several years, but has become
increasingly frustrated with Systemd, Gnome, RedHat and Ubuntu dominance. It is
believed that there is a need for a distribution that does not pander to ease of
use for casual users at the expense of putting up roadblocks for experienced Unix
veterans.</p>
<h2 id="architectures">Architectures</h2>
<p>HHL is running on the following processor architectures:</p>
<ul>
<li>x86</li>
<li>x86_64</li>
<li>armv7l</li>
<li>aarch64</li>
<li>riscv64</li>
</ul>
</main>
</div>
</div>
</div>
<script>
(function themes() {
var html = document.querySelector('html');
var themeToggleButton = document.getElementById('theme-toggle');
var themePopup = document.getElementById('theme-list');
var themeColorMetaTag = document.querySelector('meta[name="theme-color"]');
var stylesheets = {
ayuHighlight: document.querySelector("[href$='ayu-highlight.css']"),
tomorrowNight: document.querySelector("[href$='tomorrow-night.css']"),
highlight: document.querySelector("[href$='highlight.css']"),
};
function showThemes() {
themePopup.style.display = 'block';
themeToggleButton.setAttribute('aria-expanded', true);
themePopup.querySelector("button#" + get_theme()).focus();
}
function hideThemes() {
themePopup.style.display = 'none';
themeToggleButton.setAttribute('aria-expanded', false);
themeToggleButton.focus();
}
function get_theme() {
var theme;
try { theme = localStorage.getItem('mdbook-theme'); } catch (e) { }
if (theme === null || theme === undefined) {
return default_theme;
} else {
return theme;
}
}
function set_theme(theme, store = true) {
let ace_theme;
if (theme == 'coal' || theme == 'navy') {
stylesheets.ayuHighlight.disabled = true;
stylesheets.tomorrowNight.disabled = false;
stylesheets.highlight.disabled = true;
ace_theme = "ace/theme/tomorrow_night";
} else if (theme == 'ayu') {
stylesheets.ayuHighlight.disabled = false;
stylesheets.tomorrowNight.disabled = true;
stylesheets.highlight.disabled = true;
ace_theme = "ace/theme/tomorrow_night";
} else {
stylesheets.ayuHighlight.disabled = true;
stylesheets.tomorrowNight.disabled = true;
stylesheets.highlight.disabled = false;
ace_theme = "ace/theme/dawn";
}
setTimeout(function () {
themeColorMetaTag.content = getComputedStyle(document.body).backgroundColor;
}, 1);
if (window.ace && window.editors) {
window.editors.forEach(function (editor) {
editor.setTheme(ace_theme);
});
}
var previousTheme = get_theme();
if (store) {
try { localStorage.setItem('mdbook-theme', theme); } catch (e) { }
}
html.classList.remove(previousTheme);
html.classList.add(theme);
}
// Set theme
var theme = get_theme();
set_theme(theme, false);
themeToggleButton.addEventListener('click', function () {
if (themePopup.style.display === 'block') {
hideThemes();
} else {
showThemes();
}
});
themePopup.addEventListener('click', function (e) {
var theme = e.target.id || e.target.parentElement.id;
set_theme(theme);
});
themePopup.addEventListener('focusout', function(e) {
// e.relatedTarget is null in Safari and Firefox on macOS (see workaround below)
if (!!e.relatedTarget && !themeToggleButton.contains(e.relatedTarget) && !themePopup.contains(e.relatedTarget)) {
hideThemes();
}
});
// Should not be needed, but it works around an issue on macOS & iOS: https://github.com/rust-lang/mdBook/issues/628
document.addEventListener('click', function(e) {
if (themePopup.style.display === 'block' && !themeToggleButton.contains(e.target) && !themePopup.contains(e.target)) {
hideThemes();
}
});
document.addEventListener('keydown', function (e) {
if (e.altKey || e.ctrlKey || e.metaKey || e.shiftKey) { return; }
if (!themePopup.contains(e.target)) { return; }
switch (e.key) {
case 'Escape':
e.preventDefault();
hideThemes();
break;
case 'ArrowUp':
e.preventDefault();
var li = document.activeElement.parentElement;
if (li && li.previousElementSibling) {
li.previousElementSibling.querySelector('button').focus();
}
break;
case 'ArrowDown':
e.preventDefault();
var li = document.activeElement.parentElement;
if (li && li.nextElementSibling) {
li.nextElementSibling.querySelector('button').focus();
}
break;
case 'Home':
e.preventDefault();
themePopup.querySelector('li:first-child button').focus();
break;
case 'End':
e.preventDefault();
themePopup.querySelector('li:last-child button').focus();
break;
}
});
})();