<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
 <channel>
   <title>Solene'%</title>
   <description></description>
   <link>https://dataswamp.org/~solene/</link>
   <atom:link href="https://dataswamp.org/~solene/rss.xml" rel="self" type="application/rss+xml" />
   <item>
 <title>What is going on in Nix community?</title>
 <description>
   <![CDATA[
<pre># Introduction

You may have heard about issues within the Nix/NixOS community; this blog post will try to help you understand what is going on.

Please note that it is hard to get a grasp of the big picture: it is more of a long-term feeling that the project governance was wrong (or absent?) and that people got tired.

This blog post was written from my own knowledge and feelings; I clearly do not represent the community.

=> https://save-nix-together.org/ Save Nix Together: an open letter to the NixOS foundation
=> https://xeiaso.net/blog/2024/much-ado-about-nothing/ Xe blog post: Much ado about nothing

There is a maintainer departure milestone in the Nixpkgs GitHub project.

=> https://github.com/NixOS/nixpkgs/milestone/27 GitHub milestone 27: Maintainers leaving

# Project structure

First, it is important to understand how the project works.

Nix (and NixOS, although it is not the core of the project) was started by Eelco Dolstra in the early 2000s.  The project is open source, available on GitHub, and everyone can contribute.

Nix is a tool to handle packaging in a certain way, and it has another huge repository (a top 10 GitHub repository) called nixpkgs that contains all the package definitions.  nixpkgs is known to be the most up-to-date and biggest repository of packages, thanks to heavy automation and a huge community.

The NixOS Foundation (that's the name of the entity managing the project) has a board that steers the project in some direction and handles questions.  A first problem is that it is known to be slow to act and respond.

Making huge changes to Nix or nixpkgs requires writing an RFC (Request For Comments) explaining the rationale behind the change, and a consensus has to be found with the others (it is somewhat democratic).  Eelco decided a while ago to introduce a huge change in Nix (called Flakes) without going through the whole RFC process.  This created a lot of tension and criticism, because they should have gone through the process like everyone else, and the feature is half-baked but got some traction; the Nix paradigm is now split between two different modes that are not really compatible.

=> https://github.com/NixOS/rfcs/pull/49#issuecomment-659372623 GitHub Pull request to introduce Flakes: Eelco Dolstra mentioning they could merge it as experimental

There are also issues related to some sponsors of the Nix conferences, like companies related to the military, but this is better explained in the links above, so I will not recap it here.

# Company involvement

This point is what made me leave the NixOS community.  I worked for a company called Tweag, which has been involved in Nix for a while and pays people to contribute to Nix and nixpkgs to improve the user experience for their clients.  This made me realize the impact of companies on open source, and the more I got involved, the more I realized that Nix was mostly driven by companies paying developers to improve the tool for business.

Paying people to develop features or fix bugs is fine, but when a huge number of contributors are paid by companies, this leads to poor decisions and conflicts of interest.

In the current situation, Eelco Dolstra published a blog post to remind everyone that the project is open source and belongs to its contributors.

=> https://determinate.systems/posts/on-community-in-nix/ Eelco Dolstra blog post

The thing that puzzles me in this blog post is that most people at Determinate Systems (the company Eelco co-founded) are deeply involved in Nix in various ways.  In this situation, it is complicated for contributors to separate what they want for the project from what their employer wants.  It is common for Nix contributors to contribute wearing both hats.

# Conclusion

Unfortunately, I am not really surprised this is happening.  When a huge majority of people spend their free time contributing to a project they love while companies relentlessly quiet their voices, it just can't work.

I hope the Nix community will be able to sort this out and keep contributing to the project they love.  This is open source and libre software: most affected people contribute because they like doing so, and they do not deserve what is happening, but it never came with any guarantees either.

# Extra: Why did I stop using Nix?

I don't think this deserves a dedicated blog post, so here are a few words.

From my experience, contributing to Nix was complicated.  Sometimes changes could be committed within minutes, leaving no time for others to review them, and sometimes a PR could take months or years because of nitpicking and maintainers losing faith.

Another reason I stopped using Nix is that it is quite easy to get nixpkgs commit access (I don't have commit access myself, as I never wanted to inflict the Nix language on myself).  In my opinion, a supply chain attack would be easy to achieve: there are so many commits that it is impossible for a trusted group to review everything, and there are too many contributors to be sure they are all trustworthy.

# Alternative to Nix/NixOS?

If you do not like the Nix/NixOS governance, it could be time to take a look at Guix, a Nix fork started in 2012.  Its community is much smaller than Nix's, but the tooling, package set and community are far from being at rest.

Guix being a 100% libre software project, it does not target macOS like Nix does, nor will it include or package proprietary software.  For that second "problem", however, there is an unofficial repository called nonguix that contains many packages such as firmware and proprietary software; most users will want to include this repository.

Guix is old school: people exchange over IRC and send git diffs over email, so please do not bother them if this is not your cup of tea.  On top of that, Guix uses the Scheme programming language (a Lisp-1), and if you want to work with this language, Emacs is your best friend (try geiser mode!).

=> https://guix.gnu.org/ Guix official project webpage
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-nix-internal-crisis</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-nix-internal-crisis</link>
 <pubDate>Sat, 27 Apr 2024 00:00:00 GMT</pubDate>
</item>
<item>
 <title>OpenBSD scripts to convert wg-quick VPN files</title>
 <description>
   <![CDATA[
<pre># Introduction

If you use a commercial VPN, you may have noticed that they all provide WireGuard configurations in the wg-quick format, which is not directly usable on OpenBSD.

As I currently work a lot for a VPN provider, I often have to play with configurations and I really needed a script to ease my work.

I made a shell script that turns a wg-quick configuration into a hostname.if compatible file, for a full integration into OpenBSD.  This is practical if you always want to connect to a given VPN server, not for temporary connections.

=> https://man.openbsd.org/hostname.if OpenBSD manual pages: hostname.if
=> https://git.sr.ht/~solene/wg-quick-to-hostname-if Sourcehut project: wg-quick-to-hostname-if

# Usage

It is really easy to use: download the script and mark it executable, then run it with your wg-quick configuration as a parameter, and it will write the hostname.if file to the standard output.

```
wg-quick-to-hostname-if fr-wg-001.conf | doas tee /etc/hostname.wg0
```

The generated file uses a trick to dynamically figure out the current default route, which is required to keep a non-VPN route to the VPN gateway.
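
For the curious, the idea boils down to asking the kernel for the current default gateway and adding a host route to the VPN endpoint through it.  Here is a sketch of the concept only; the endpoint address is an example and the exact commands in the script may differ:

```
# find the current default gateway
GW=$(route -n get default | awk '/gateway:/ {print $2}')
# keep reaching the VPN endpoint outside of the tunnel
route add -host 203.0.113.10 "$GW"
```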

# Short VPN sessions

When I shared my script on mastodon, Carlos Johnson shared their own script which is pretty cool and complementary to mine.

If you prefer to establish a VPN for a limited session, you may want to take a look at their script.

=> https://gist.github.com/callemo/aea83a8d0e1e09bb0d94ab85dc809675#file-wg-sh Carlos Johnson GitHub: file-wg-sh gist

# Prevent leaks

If you need your WireGuard VPN to be leakproof (meaning no network traffic should leave the machine outside the VPN tunnel, except toward the VPN gateway), you should absolutely do the following (a rough configuration sketch follows below):

* the WireGuard interface should be in rdomain 0 (where your programs run)
* the WireGuard tunnel itself should be established over another rdomain (the one holding your real uplink)
* use PF to block traffic on the other rdomain that is not toward the VPN gateway
* use the VPN provider's DNS or a no-log public DNS provider

=> https://dataswamp.org/~solene/2021-10-09-openbsd-wireguard-exit.html Older blog post: WireGuard and rdomains
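
To illustrate the rdomain part, here is a rough sketch: the interface names, addresses, keys and rdomain number are made up for the example, and the default route and DHCP handling are left out, so treat it as a starting point rather than a drop-in configuration.

```
# /etc/hostname.em0 -- the uplink lives in rdomain 1
rdomain 1
dhcp

# /etc/hostname.wg0 -- wg0 stays in rdomain 0, its tunnel traffic uses rtable 1
wgkey PRIVATE_KEY_HERE
wgpeer SERVER_PUBLIC_KEY wgendpoint 203.0.113.10 51820 wgaip 0.0.0.0/0
wgrtable 1
inet 10.2.0.2/32

# /etc/pf.conf excerpt -- in rdomain 1, only WireGuard traffic to the gateway may leave
# (DHCP and loopback rules omitted for brevity)
block drop on rdomain 1
pass out on rdomain 1 proto udp to 203.0.113.10 port 51820
```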

# Conclusion

OpenBSD's ability to configure WireGuard VPNs with ifconfig has always been an incredible feature, but it was not always fun to convert from wg-quick files.  Now, using a commercial VPN got a lot easier thanks to a few pieces of shell.
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-openbsd-wg-quick-converter</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-openbsd-wg-quick-converter</link>
 <pubDate>Mon, 29 Apr 2024 00:00:00 GMT</pubDate>
</item>
<item>
 <title>A Stateless Workstation</title>
 <description>
   <![CDATA[
<pre># Introduction

I have always had an interest in practical security on computers, be they workstations or servers.  Many kinds of threats exist for users and system administrators, and it's up to them to define a threat model to know what is acceptable or not.  Nowadays, we have plenty of choice in the operating system land to pick what works best for that threat model: OpenBSD with its continuous security mechanisms, Linux with hardened flags (too bad grsec isn't free anymore), Qubes OS to keep everything separated, immutable operating systems like Silverblue or MicroOS (in my opinion they don't bring much to the security table though), etc.

My threat model has always been the following: an exploit on my workstation remaining unnoticed almost forever, stealing data and capturing keystrokes continuously.  This one would be particularly bad because I have access to many servers through SSH, like OpenBSD servers.  Protecting against it is particularly complicated; the best mitigations I have found so far are to use Qubes OS with disposable VMs or to restrict outbound network access, but it's not practical.

My biggest gripe with computers has always been "state".  What is state?  It is what distinguishes one computer from another: installed software, configuration, data at rest (pictures, documents, etc.).  We keep state because we don't want to lose work, and we want our computers to hold our preferences.

But what if I could go stateless?  The best defense against data stealers is to own nothing, so let's go stateless!

# Going stateless

My idea is to be able to use any computer around, and be able to use it for productive work, but it should always start fresh: stateless.

A stateless productive workstation obviously has challenges: How would it help with regard to security? How would I manage passwords? How would I work on a file over time? How to achieve this?

I have been able to address each of these questions.  I am now using a stateless system.

> States? Where we are going, we don't need states! (certainly Doc Brown in a different timeline)

## Data storage

It is obvious that we need to keep files for most tasks.  This setup requires a way to store files on a remote server.

Here are different methods to store files:

* Nextcloud
* Seafile
* NFS / CIFS over VPN
* iSCSI over VPN
* sshfs / webdav mount
* Whatever works for you

Encryption could be done locally with tools like cryfs or gocryptfs, so only encrypted files would be stored on the remote server.
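
As a quick illustration with gocryptfs (the directory names are only examples): the encrypted directory is what gets synchronized to the remote server, while the clear view only exists locally while it is mounted.

```
# one-time initialization of the encrypted directory
gocryptfs -init ~/Seafile/vault
# mount it for the session, work in ~/clear, then unmount when done
mkdir -p ~/clear
gocryptfs ~/Seafile/vault ~/clear
fusermount -u ~/clear
```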

Nextcloud end-to-end encryption should not be used as of April 2024; it is known to be unreliable.

Seafile, a lesser-known alternative to Nextcloud focused only on file storage, supports end-to-end encryption and is reliable.  I chose this one as I had a good experience with it 10 years ago.

Having access to the data storage in a stateless environment comes with an issue: getting the credentials to access the files.  Passwords should be handled differently.

## Password management

When going stateless, the first thing required after a boot is to access the password manager, otherwise one would be locked out.

The passwords must be reachable from anywhere on the Internet, protected by a passphrase you know and/or a hardware token you have (and why not 2FA).

A self-hosted solution is Vaultwarden (it used to be named bitwarden_rs), an open source reimplementation of the Bitwarden server.
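
For the idea, a minimal self-hosted Vaultwarden deployment can be as small as the official container image.  This is only a sketch (paths and ports are examples), and you will want a TLS reverse proxy in front of it before exposing it to the Internet.

```
docker run -d --name vaultwarden \
  -v /srv/vaultwarden:/data \
  -p 127.0.0.1:8080:80 \
  vaultwarden/server:latest
```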

Any proprietary service offering password management could work too.

A keepassxc database on a remote storage service for which you know the password could also be used, but it is less practical.

## Security

The main driving force for this project is to increase my workstation security, so I had to think hard about this part.

Going stateless requires a few changes compared to a regular workstation:

* data should be stored on a remote server
* passwords should be stored on a remote server
* a bootable live operating system
* programs to install

This is mostly a paradigm change with pros and cons compared to a regular workstation.

Data and passwords stored in the cloud?  This is not really an issue when using end-to-end encryption, as long as the software is trustworthy and its code is correct.

A bootable live operating system is quite simple to acquire.  There is a ton of Linux distributions able to boot from a CD or from USB, and non-Linux live systems exist as well.  A bootable USB device could be compromised while a CD is an immutable medium, but there are USB devices such as the Kanguru FlashBlu30 with a physical switch to make the device read-only.  A USB device can also be removed immediately after the boot, making it safe.  And if you stop trusting a USB memory stick, just buy a new one and rewrite it from scratch.

=> https://www.kanguru.com/products/kanguru-flashblu30-usb3-flash-drive Product page: Kanguru FlashBlu30

As for installed programs, it is fine as long as they are packaged and signed by the distribution; the risks are the same as for a regular workstation.

The system should be more secure than a typical workstation because:

* the system never has access to all the data at once; the user is supposed to only fetch what they need for a given task
* any malware that managed to reach the system would not persist across a reboot

The system would be less secure than a typical workstation because:

* remote servers could be exploited (or go offline, which is not a security issue but…); this is why end-to-end encryption is a must

To circumvent this, I only have the password manager service reachable from the Internet, which then allows me to create a VPN to reach all my other services.

## Ecology

I think this is a dimension that deserves to be analyzed for such a setup.  A stateless system requires remote servers to run, and uses bandwidth to reinstall programs at each boot.  It is less ecological than a regular workstation, but at the same time it may also encourage some rationalization of computer usage, because it is a bit less practical.

## State of the art

Here is a list of existing setups that could provide a stateless experience, with support for either a custom configuration or a mechanism to store files (like SSH or GPG keys, although a USB smart card would be better for those):

* NixOS with impermanence, this is an installed OS, but almost everything on disk is volatile
* NixOS live-cd generated from a custom config
* Tails, comes with a mechanism to locally store encrypted files, privacy-oriented, not really what I need
* Alpine with LBU, comes with a mechanism to locally store encrypted files and cache applications
* FuguITA, comes with a mechanism to locally store encrypted files (OpenBSD based)
* Guix live-cd generated from a custom config
* Arch Linux generated live-cd
* Ubuntu live-cd, comes with a mechanism to retrieve files from a partition named "casper-rw"

Otherwise, any live system could just work.

Special bonus to the NixOS and Guix generated live-cds, as you can choose which software will be included, in its latest version.  Similar bonus with Alpine and LBU: packages are always installed from a local cache, which means you can update them.

A live-cd generated a few months ago is certainly not really up to date.

# My experience

I decided to go with Alpine and its LBU mechanism; it is not 100% stateless, but it hits the perfect spot between "I have to bootstrap everything from scratch" and "I can reduce the burden to a minimum".

=> https://dataswamp.org/~solene/2023-07-14-alpine-linux-from-ram-but-persistent.html Earlier blog post: Alpine Linux from RAM but persistent

My setup requires two USB memory sticks:

* one with the Alpine installer; upgrading to a newer Alpine version only requires writing the new release to that stick
* a second one to store the package cache and some settings such as the package list and specific changes in /etc (user name, password, services)

While it is not 100% stateless, the files on the second memory stick are just a way to have a working customized Alpine.
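
For reference, the LBU workflow boils down to a few commands.  This is a rough sketch (media names and paths are examples); the blog post linked above goes into the details.

```
# tell Alpine where to store the committed state and the package cache
setup-lbu usb
setup-apkcache /media/usb/cache
# track an extra file (example path) and persist the current state to the stick
lbu include /home/solene/.mozilla
lbu commit -d
```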

This is a pretty cool setup: it boots really fast as all the packages are already cached on the second memory stick (packages are signed, so it is safe).  I made a Firefox profile with settings and extensions, so it is always fresh and ready when I boot.

I decided to go with the following stack, entirely self-hosted:

* Vaultwarden for passwords
* Seafile for data (behind VPN)
* Nextcloud for calendar and contacts (behind VPN)
* Kanboard for task management (behind VPN)
* Linkding for bookmarks (behind VPN)
* WireGuard for VPN

This setup offered me freedom.  Now, I can bootstrap into my files and passwords from any computer (a trusted USB memory stick is advisable though!).

I can also boot any kind of operating system on any of my computers; it became so easy it's refreshing.

I do not make use of dotfiles or stored configurations because I use vanilla settings for most programs, although a git repository could be used to fetch all the settings quickly.

=> https://github.com/dani-garcia/vaultwarden Vaultwarden official project website
=> https://www.seafile.com/en/home/ Seafile official project website
=> https://nextcloud.com/ Nextcloud official project website
=> https://kanboard.org/ Kanboard official project website
=> https://github.com/sissbruecker/linkding Linkding official project website

# Backups

A tricky part of this setup is doing serious backups.  The method will depend on the setup you chose.

With my self-hosted stack, restic makes a daily backup to two remote locations, but I must still be able to reach the backups if my services become unavailable due to a server failure.
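
For the idea, a restic backup job can be as short as this; the repository locations and paths are placeholders, not my real setup.

```
export RESTIC_PASSWORD_FILE=/root/.restic-password
restic -r sftp:backup1.example.com:/backups/seafile backup /srv/seafile
restic -r sftp:backup2.example.com:/backups/seafile backup /srv/seafile
```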

If you use proprietary services, it is likely they handle backups themselves, but it is better not to trust them blindly: check out all your data on a regular schedule to make a proper backup of your own.

# Conclusion

This is an interesting approach to workstation management that I needed to try.  I really like how it freed me from worrying about each workstation: they are now all disposable.

I made a mind map for this project; you can view it below, it may be useful to better understand how everything fits together.

=> static/stateless_computing-fs8.png Stateless computing mind mapping document
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-workstation-going-stateless</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-workstation-going-stateless</link>
 <pubDate>Tue, 23 Apr 2024 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Lessons learned with XZ vulnerability</title>
 <description>
   <![CDATA[
<pre># Intro

Yesterday, Red Hat announced that the xz library was badly compromised and could be used as a remote code execution vector.  It's still not clear exactly what's going on, but you can learn about it in the following GitHub discussion, which also links to the original posts:

=> https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27 Discussion about xz being compromised

# What's the state?

As far as we currently know, xz-5.6.0 and xz-5.6.1 contain some really obfuscated code that would trigger only in sshd, and only when all of the following are true:

* the system is running systemd
* openssh is compiled with a patch to add a feature related to systemd
* the system is using glibc (this is mandatory for systemd systems afaik anyway)
* the xz package was built using the release tarballs published on GitHub and not the auto-generated tarballs; the malicious code is missing from the git repository

So far, it seems openSUSE Tumbleweed, Fedora 40 and 41, and Debian sid were affected and vulnerable.  Nobody knows exactly what the vulnerability does yet; when security researchers get their hands on it, we will know more.

OpenBSD, FreeBSD, NixOS and Qubes OS (dom0 + official templates) are unaffected.  I didn't check the others, but Alpine and Guix shouldn't be vulnerable either.
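
If you want to quickly check a Linux system, you can look at the installed xz version and whether sshd is linked against liblzma; this is only a rough indicator, not a proof of compromise or of safety.

```
xz --version
ldd "$(command -v sshd)" | grep -i liblzma
```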

=> https://security.gentoo.org/glsa/202403-04 Gentoo security advisory (unaffected)

# What lessons could we learn?

It is really unfortunate that a piece of software as important and harmless in appearance got compromised.  This made me think about how we could best protect against this kind of issue, and I came to the following conclusions:

* packages should be built from the source code repository instead of tarballs whenever possible (sometimes tarballs contain vendored code which would be cumbersome to pull otherwise); at least we would know what to expect
* public network services that should only be used by known users (like OpenSSH, or an IMAP server in a small company) should run behind a VPN
* the OpenBSD way of having a base system developed as a whole by a single team is great; this kind of vulnerability is barely possible there (in the base system only, ports aren't audited)
* whenever possible, separate each network service into its own operating system instance (using hardware machines, virtual machines or even containers)
* avoid daemons running as root as much as possible
* use opensnitch on workstations (Linux only)
* control outgoing traffic whenever you can afford to

I don't have a strong opinion about what could be done to protect the supply chain.  As a packager, it's not possible to audit the code of every piece of software we update.  My take is that we have to deal with it; xz is certainly not the only vulnerable library running in production.

However, the risks could be reduced by:

* using fewer programs
* using less complex programs
* compiling programs with fewer options to pull in fewer dependencies (FreeBSD and Gentoo both provide this feature and it's great)

# Conclusion

I actually had two systems running the vulnerable library, both on openSUSE MicroOS, which updates very aggressively (daily update + daily reboot).  There is no magic balance between "update as soon as possible" and "wait for some people to take the risks first".

I'm going to rework my infrastructure to expose the bare minimum to the Internet, and use a VPN for all the services meant for known users.  The peace of mind obtained will be far greater than the burden of setting up WireGuard VPNs.
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-lessons-learned-xz-vuln</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-lessons-learned-xz-vuln</link>
 <pubDate>Sat, 30 Mar 2024 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Cloud gaming review using Playstation Plus</title>
 <description>
   <![CDATA[
<pre># Introduction

While testing the cloud gaming service GeForce Now, I've learned that PlayStation also had an offer.

Basically, if you use a PlayStation 4 or 5, you can subscribe to the first two tiers to benefit from some services and a games library, but the last tier (Premium) adds more content AND allows you to play video games on a computer with their client, no PlayStation required.  I already had the second tier subscription, so I paid the small extra to switch to Premium in order to experiment with the service.

=> https://www.playstation.com/en-us/ps-plus/ PlayStation Plus official website

# Game library

Compared to GeForce Now, while you are subscribed you have a huge game library at hand.  This makes the service a lot cheaper if you are happy with the content.  The service costs 160 $/€ per year if you subscribe for 12 months, which is roughly the price of 2 AAA games nowadays...

# Streaming service

The service is only available using the PlayStation Plus Windows program.  It's possible to install it on Linux, but it will use more CPU because hardware decoding doesn't seem to work on Wine (even wine-staging with vaapi compatibility checked).

There are no clients for Android, and you can't use it in a web browser.  The Xbox Game Pass streaming and GeForce Now services offer all of that.

Sadness will start here.  The service is super promising, but the application is currently a joke.

If you don't plug in a PS4 controller (named a DualShock 4), you can't use the "touchpad" button, which is mandatory to start a game in Tales of Arise and very important in many other games.  If you have a different controller, on Windows you can use the program "DualShock 4 emulator" to emulate it; on Linux it's impossible to use, even with a genuine controller.

A PS5 controller (DualSense) is NOT compatible with the program; the touchpad won't work.

=> https://github.com/r57zone/DualShock4-emulator DualShock4 emulator GitHub project page

Obviously, you can't play without a controller, except if you use a program to map your keyboard/mouse to a fake controller.

# Gaming quality

There are absolutely no settings in the application, you just run a game by clicking on it; did I mention there is no way to search for a game?

I guess games are started in 720p, but I'm not sure; putting the application in full screen didn't degrade the quality, so maybe it's 1080p that doesn't go full screen when you run it...

Frame rate... this sucks.  Games seem to run on an original (fat) PS4, not a PS4 Pro that would allow 60 fps.  In most games you are stuck with 30 fps and an insane input lag.  I've not been able to cope with AAA games like God of War or Watch Dogs: Legion, it was horrible.

Independent games like Alex Kidd remaster, Monster Boy or Rain World did feel very smooth though (60fps!), so it's really an issue with the hardware used to run the games.

Don't expect any PS5 games in streaming from Windows, there are none.

The service allows PlayStation users to stream all the games from the library (including PS5 games) at up to 2160p@120fps, but application users get none of that.  That feature is only useful if you want to try a game before installing it, or if your PlayStation storage is full.

# Cloud saving

This is fun here too.  The PlayStation Plus program has its own cloud saves, but if you also play on a PlayStation, its saves are sent to a different storage than the PlayStation cloud saves.

There is a horrible menu to copy saves from one pool to the other.

This is not an issue if you only use the streaming application or the PlayStation, but it gets very hard to figure out where your save is if you play on both.

# Conclusion

I have been highly disappointed by the streaming service (outside of PlayStation use).  The Windows program required signing in twice before working (I tried on 5 devices!), most interesting games run poorly due to the PS4 hardware, and there is no way to enable the performance mode that was added to many games to support the PS4 Pro.  This is pretty curious, as streaming from a PlayStation device is a stellar experience: super smooth, high quality, no input lag, no waiting, crystal clear picture.

No Android application? Curious...  No support for a genuine PS5 controller, WTF?

The service is still young, I really hope they will work at improving the streaming ecosystem.

At least, it works reliably and pretty well for simpler games.

It could be a fantastic service if the following requirements were met:

* proper hardware to run games at 60fps
* greater controller support
* allow playing in a web browser, or at least allow people to run it on smartphones with a native application
* an open source client while there
* merged cloud saves
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-playstation-plus-streaming-review</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-playstation-plus-streaming-review</link>
 <pubDate>Sat, 16 Mar 2024 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Cloud gaming review using Geforce Now</title>
 <description>
   <![CDATA[
<pre># Introduction

I'm finally done with ADSL now as I got access to optical fiber last week!  It was time for me to try cloud gaming again and see how it improved since my last use in 2016.

If you are not familiar with cloud gaming, please do not run away; here is a brief description.  Cloud gaming refers to a service allowing one to play, locally, a game that runs on a remote machine (either on the local network or over the Internet).

There are a few commercial services available, mainly GeForce Now, PlayStation Plus Premium (other tiers don't have streaming), Xbox Game Pass Ultimate and Amazon Luna.  Two major services died along the way: Google Stadia and Shadow (which is back now with a different formula).

A note on Shadow: they now offer access to an entire computer running Windows, and you do what you want with it, which is a bit different from the other "gaming" services listed above.  It's expensive, but no more than renting an AWS system with equivalent specs (I know some people doing that for gaming).

This article is about the service Nvidia GeForce Now (not sponsored, just to be clear).

I tried the free tier, premium tier and ultimate tier (thanks to people supporting me on Patreon, I could afford the price for this review).

=> https://www.nvidia.com/en-us/geforce-now/ Geforce Now official page

=> https://play.geforcenow.com/mall/ Geforce Now page where you play (not easy to figure after a login)

# The service

This is the first service I tried, back in 2016 when I received an Nvidia Shield HTPC; the experience was quite solid back in the day.  But is it good in 2024?

The answer is clear: yes, it's good, but it has limitations you need to be aware of.  The free tier allows playing for a maximum of 1 hour in a single session, with a waiting queue that can be fast (< 1 minute) or long (> 15 minutes); the average waiting time I had was around 9 minutes.  The waiting queue also displays ads now.

The premium tier at 11€$/month removes the queue system by giving you priority over free users, always assigns an RTX card and allows playing up to 6 hours in a single session (you just need to start a new session if you want to continue).

Finally, the ultimate tier costs 22 €/$ per month and allows you to play in 4K@120fps on an RTX 4080, for up to 8 hours per session.

The tiers are quite good in my opinion: you can try the service for free to check if it works for you, then the premium tier is affordable enough to be used regularly.  The ultimate tier will only be useful to advanced gamers who need 4K or higher frame rates.

Nvidia just released a new offer in early March 2024: a premium daily pass for $3.99 or an ultimate daily pass for 8 €.  This is useful if you want to evaluate a tier before deciding to pay for 6 months.  You will understand later why this daily pass can be useful compared to buying a full month.

# Operating system support

I tried the service using a Steam Deck, a Linux computer over Wi-Fi and Ethernet, a Windows computer over Ethernet and in a VM on Qubes OS.  The latency and quality were very different.

If you play in a web browser (Chrome-based, Edge, Safari), make sure it supports hardware accelerated video decoding; this is the default on Windows but a huge struggle on Linux, where Chrome/Chromium support is recent and can be enabled using `chromium --enable-features=VaapiVideoDecodeLinuxGL --use-gl=angle`.  There is a Linux Electron app, but it does nothing more than bundle the web page in Chromium, without acceleration.

In a web browser, the codec used is limited to h264, which does not work great with dark areas and is less effective than advanced codecs like AV1 or HEVC (commonly known as h265).  If your web browser can't handle the stream, it will lose packets, and the GeForce Now service will instantly reduce the quality until you stop losing packets, which makes things very ugly until it recovers, and then it drops again.  Using hardware acceleration solves the problem almost entirely!

Web browser clients are also limited to 60 fps (so ultimate tier is useless), and Windows web browsers can support 1440p but no more.

On Windows and Android you can install a native GeForce Now application, and it has a LOT more features than the browser.  You can enable Nvidia Reflex to remove any input lag, HDR for compatible screens, 4K resolution, a 120 fps frame rate, etc.  There is also a feature to add color filters for whatever reason...  The native program used AV1 (I only tried it with the ultimate tier); games were smooth with stellar quality while not using more bandwidth than h264 at 60 fps.

I took a screenshot while playing Baldur's Gate 3 on different systems, you can compare the quality:

=> static/geforce_now/windows_steam_120fps_natif.png Playing on Steam native program, game set to maximum quality
=> static/geforce_now/windows_av1_120fps_natif_sansupscale_gamma_OK.png Playing on Geforce Now on Windows native app, game set to maximum quality
=> static/geforce_now/linux_60fps_chrome_acceleration_maxquality_gammaok.png Playing on Geforce Now on Linux with hardware acceleration, game set to maximum quality

In my opinion, the best looking one is surprisingly GeForce Now on Windows, then the native run on Steam, and finally Linux, where it's still acceptable.  You can see a huge quality difference in the icons in the bottom bar.

# Tier system

When I upgraded from free to premium tier, I paid for 1 month and was instantly able to use the service as a premium user.

Premium gives you priority in the queues: I saw the queue screen a few times for a few seconds, so there is virtually no queue, and you can play for 6 hours in a row.

When I upgraded from the premium to the ultimate tier, I was expecting to pay the price difference between my current subscription and the new one, but it worked out quite differently.  I had to pay for a whole month of ultimate tier, and my remaining premium time was converted into ultimate time; as ultimate costs a bit more than twice the premium price, a pro rata was applied, resulting in something like 12 extra days of ultimate for the remaining premium month.

The ultimate tier allows reaching 4K resolution and a 120 fps refresh rate, allows saving video settings in games so you don't have to tweak them every time you play, and provides an Nvidia 4080 for every session, so you can always set the graphics settings to maximum.  You can also play up to 8 hours in a row.  Additionally, you can record gaming sessions or the past n minutes; there is a dedicated panel available with Ctrl+G.  It's possible to reach 240 fps on compatible monitors, but only at 1080p.

Due to the tier upgrade method, the ultimate daily pass can be interesting: if you have 6 months of premium left, you certainly don't want to convert them into 2 months of ultimate plus pay for 1 month of ultimate just to try it.

# Gaming quality

As a gamer, I'm highly sensitive to latency, and local streaming has always felt poor in that regard, so I've been very surprised to see I can play an FPS game with a mouse over cloud gaming.  I had a ping of 8-75 ms to the streaming servers, which was really OK.  Games featuring "Nvidia Reflex" have no noticeable input lag, it's almost magic.

When using a proper client (native Windows client or a web browser with hardware acceleration), the quality was good, input lag barely noticeable (none in the app), it made me very happy :-)

Using the free tier, I always had a rig good enough to set the graphics quality to High or Ultra, which surprised me for a free service.  On premium and above, I had an Nvidia 2080 at minimum, which is still relevant nowadays.

The service can handle multiple controllers!  You can use any kind of controller, and even mix Xbox / PlayStation / Nintendo controllers, no specific hardware required here.  This is pretty cool as I can visit my siblings, bring controllers and play together on their computer <3.

Another interesting benefit is that you can switch your gaming session from one device to another by connecting with the other device while already playing; GeForce Now will hand the session over to the newly connected device without interruption.

# Games library

This is where GeForce Now is pretty cool: you don't need to buy games from them.  You can import your own libraries like Steam, Ubisoft, Epic Games Store, GOG (only CD Projekt Red games) or Xbox Game Pass games.  Not all games from your libraries will be playable though!  And for some reason, some games are only available when running from Windows (native app or web browser), like Genshin Impact, which won't appear in the games list when connected from a non-Windows client?!

If you already own games (don't forget to claim weekly free Epic store games), you can play most of them on GeForce Now, and thanks to cloud saves, you can sync progression between sessions or with a local computer.

There are a bunch of free-to-play games that are good (like Warframe, Genshin Impact, some MMOs), so you could enjoy playing video games without having to buy one (until you get bored?).

# Cost efficiency

If you don't currently own a modern gaming computer and you subscribe to the premium tier (9.17 $/€ per month when signing up for 6 months), this costs you 110 $/€ per year.

Given that an equivalent GPU costs at least 400 €/$ and could cope with games in High quality for 3 years (I'm being optimistic), the GPU alone costs more than subscribing to the service.  Of course, a local GPU can be used for data processing nowadays, could be sold second hand, or could be used for many years on old games.

If you add the whole computer around the GPU, renewed every 5 or 6 years (we are aiming to play modern games in high quality here!), you can add 1200 $/€ per 5 years (or 240 $/€ per year).

When using the ultimate tier, you instantly get access to the best GPU available (currently a GeForce 4080, with a retail value of 1300 €/$).  Cost-wise, this is impossible to beat with owned hardware.

I did some math to figure out how much money you can save on electricity: the average gaming rig draws approximately 350 watts when playing, while a GeForce Now thin client and a monitor would use 100 watts in the worst case scenario (a laptop alone would be more like 35 watts).  So, you save 0.25 kWh per hour of gaming; if one plays 100 hours per month (that's 20 days playing 5 hours, or 3.33 hours per day), they would save 25 kWh.  The official rate in France is 0.25 € / kWh, which would result in a 6.25 € saving in electricity per month.  The monthly subscription is immediately less expensive when taking this into account.  Obviously, if you play less, the savings are smaller.

# Bandwidth usage and ecology

Most of the time, the streaming used between 3 and 4 MB/s for 1080p@60fps (full-hd resolution, 1920x1080, at 60 frames per second) in automatic quality mode.  Playing at 30 fps or at smaller resolutions will use drastically less bandwidth.  I've been able to play in 1080p@30 on my old ADSL line! (the quality was degraded, but good enough).  Playing at 120 fps slightly increased the bandwidth usage, by about 1 MB/s.

I remember a long tech article about ecology and cloud gaming which concluded that cloud gaming is more "eco-friendly" than running locally only if you play less than a dozen hours.  However, it always assumed you already had a capable gaming computer locally, whether you use cloud gaming or not, which is a huge bias in my opinion.  It also didn't account for the fact that one may install a video game multiple times and that a single game now weighs 100 GB (which is equivalent to 20 hours of cloud gaming bandwidth-wise!).  The biggest cons were the bandwidth requirements and the worldwide maintenance needed to keep high speed lines for everyone.  I do think cloud gaming is way more efficient as it allows pooling gaming devices instead of everyone having their own hardware.

As a comparison, 4K streaming at Netflix uses 25 Mbps of network (~ 3.1 MB/s).

# Playing on Android

GeForce Now allows you to play any compatible game on Android, but is it worth it?  I tried it with a Bluetooth controller on my BQ Aquaris X running LineageOS (a 7 year old phone with average specs and a 720p screen).

I was able to play over Wi-Fi using the 5 GHz network; it felt perfect, except that I had to position the smartphone screen in a comfortable way.  This drained the battery at a rate of 0.7% per minute, but this is an old phone and I expect newer hardware to do better.

On 4G, the battery usage was lower than on Wi-Fi, at 0.5% per minute.  The service at 720p@60fps used an average of 1.2 MB/s of data for a gaming session of Monster Hunter: World.  At this rate, you can expect a data usage of about 4.3 GB per hour of gameplay, which could be a lot or cheap depending on your usage and mobile subscription.

Overall, playing on Android was very good, but only if you have a controller.  There are interesting folding controllers that sandwich the smartphone between two parts, turning it into something that looks like a Nintendo Switch; this can be a very interesting device for players.

# Tips

You can use "Ctrl+G" to change settings while in game or also display information about the streaming.

In the GeForce Now settings (not in-game), you can choose the server location if you want to try a different datacenter.  I set it to the nearest one, otherwise I could land on a remote one with a bad ping.

GeForce Now even works on OpenBSD or Qubes OS qubes (more on that later on Qubes OS forum!).

=> https://forum.qubes-os.org/t/cloud-gaming-with-geforce-now/24964 Qubes OS forum discussion

# Conclusion

GeForce Now is a pretty neat service: the free tier is good enough for occasional gamers who play once in a while for a short session, and the paid tiers provide a cheaper alternative to keeping a gaming rig up-to-date.  I really like that they allow me to use my own library instead of having to buy games on their own store.

I'm preparing another blog post about local and self-hosted cloud gaming, and I have to admit I haven't been able to do better than GeForce Now, even on my local network...  The engineers at GeForce Now certainly know their stuff!

The experience was solid and enjoyable even on a 10 year old laptop.  A "cool" feature when playing is the surrounding silence, as no local CPU/GPU is crunching on rendering!  My GPU is still capable of handling modern games at average quality at 60 FPS, so I may consider using the premium tier in the future instead of replacing my GPU.
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-geforce-now-review</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-geforce-now-review</link>
 <pubDate>Sat, 09 Mar 2024 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Script NAT on Qubes OS</title>
 <description>
   <![CDATA[
<pre># Introduction

As a daily Qubes OS user, I often feel the need to expose a port of a given qube to my local network.  However, the process is quite painful because it requires adding the NAT rules on each layer (usually net-vm => sys-firewall => qube); it's a lot of wasted time.

I wrote a simple script, meant to be run from dom0, that does the whole job: opening the ports on the qube, and, for each NetVM in the chain, opening and redirecting the ports.

=> https://git.sr.ht/~solene/qubes-os-nat Qubes OS Nat git repository

# Usage

It's quite simple to use; the hardest part will be to remember how to copy it to dom0 (download it in a qube and use `qvm-run --pass-io` from dom0 to retrieve it).
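
As a reminder, that copy step looks roughly like this (the qube name and path are examples):

```
qvm-run --pass-io my-qube 'cat /home/user/nat.sh' > nat.sh
```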

Make the script executable with `chmod +x nat.sh`.  Now, if you want to redirect port 443 of a qube, you can run `./nat.sh qube 443 tcp`.  That's all.

Be careful: the changes ARE NOT persistent.  This is on purpose; if you always want to expose ports of a qube to your network, you should script its NetVM accordingly.

# Limitations

The script does not alter the firewall rules handled by `qvm-firewall`, it only opens the ports and redirects them (this happens at a different level).  This can be cumbersome for some users, but I decided not to touch rules that are hard-coded by users in order not to break any expectations.

Running the script should not break anything.  It works for me, but it has only been lightly tested so far.

# Some useful ports

## Avahi daemon port

The avahi daemon uses the UDP port 5353.  You need this port to discover devices on a network.  This can be particularly useful to find network printers or scanners and use them in a dedicated qube.

# Evolutions

It could be possible to use this script through qubes-rpc; this would allow any qube to ask for a port forwarding.  I was going to write it this way at first, but then I thought it might be a bad idea to allow a qube to run a dom0 script as root that has to parse untrusted input, but your mileage may vary.</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-qubes-os-nat</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-qubes-os-nat</link>
 <pubDate>Sat, 09 Mar 2024 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Some OpenBSD features that aren't widely known</title>
 <description>
   <![CDATA[
<pre># Introduction

In this blog post, you will learn about some OpenBSD features that can be useful, but not widespread.

They often have a niche usage, but it's important to know they exist to prevent you from reinventing the wheel :)

=> https://www.openbsd.org OpenBSD official project website

# Features

The following features are not all OpenBSD specific, as some can be found on other BSD systems.  Most of this knowledge will not be useful to Linux users.

## Secure level

The secure level is a sysctl named `kern.securelevel`; it has 4 different values, from level -1 to level 2, and it's only possible to increase the level.  By default, the system enters secure level 1 when going multi-user (the default when booting a regular installation).

It's then possible to escalate to the last secure level (2), which will enable the following extra security:

* all raw disks are read-only, so it's not possible to try to make a change to the storage devices
* the time is almost locked; it's only possible to adjust the clock slowly, by small steps (maybe 1 second max every so often)
* the PF firewall rules can't be modified, flushed or altered

This feature is mostly useful for a dedicated firewall with rules that rarely change.  Preventing the time from changing is really useful for remote logging, as it lets you be sure of "when" things happened, and you can be assured past logs weren't modified.
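
For example, on such a firewall you could raise the level at runtime and make it the default for the next boots.  This is only a sketch; check securelevel(7) before applying it, the boot-time value being adjustable in /etc/rc.securelevel.

```shell
# sysctl kern.securelevel=2
# echo 'securelevel=2' >> /etc/rc.securelevel
```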

The default secure level 1 already enables some extra security: for example, the "immutable" and "append-only" file flags can't be removed.  These overlooked flags (which can be applied with chflags) can lock down files to prevent anyone from modifying them.  The append-only flag is really useful for logs because existing content can't be modified while new content can still be added; history can't be rewritten this way.

=> https://man.openbsd.org/securelevel OpenBSD manual pages: securelevel
=> https://man.openbsd.org/chflags OpenBSD manual pages: chflags

This feature exists in other BSD systems.

## Memory allocator extra checks

OpenBSD's memory allocator can be tweaked, system-wide or per command, to add extra checks.  This can be used either for security reasons or to look for memory allocation related bugs in a program (these are VERY common...).

There are two methods to apply the changes:

* system-wide by using the sysctl `vm.malloc_conf`, either immediately with the sysctl command, or at boot in `/etc/sysctl.conf` (make sure you quote its value there, some characters such as `>` will create troubles otherwise, been there...)
* on the command line by prepending `env MALLOC_OPTIONS="flags" program_to_run`

The man page gives the list of flags that can be used as options; the easiest one to use is `S` (for security checks).  The man page states that a program misbehaving with any flag other than X is buggy, so it's not YOUR fault if you use malloc options and the program crashes (except if you wrote the code ;-) ).
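
For example, to enable the `S` flag either system-wide or only for a single run (my_program being any program you want to test):

```shell
# sysctl vm.malloc_conf=S
$ env MALLOC_OPTIONS=S ./my_program
```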

=> https://man.openbsd.org/malloc OpenBSD manual pages: malloc (search for MALLOC OPTIONS)

## File flags

You are certainly used to file attributes like permissions or ownership, but on many file systems (including OpenBSD's FFS), there are flags as well!

The file flags can be altered with the command `chflags`; there are a few flags available:

* nodump: prevent the files from being saved by the command `dump` (except if you use a flag in dump to bypass this)
* sappnd: the file can only be written to in append mode; only root can set / remove this flag
* schg: the file cannot be changed, it becomes immutable; only root can alter this flag
* uappnd: same as sappnd mode but the user can alter the flag
* uchg: same as schg mode but the user can alter the flag

As explained in the secure level section above, at secure level 1 (the default!), the sappnd and schg flags can't be removed; you would need to boot into single user mode to remove them.

Tip: remove the flags on a file with `chflags 0 file [...]`

You can check the flags on files using `ls -lo`; it looks like this:

```
terra$ chflags uchg get_extra_users.sh
terra$ ls -lo get_extra_users.sh
-rwxr-xr-x  1 solene  solene  uchg 749 Apr  3  2023 get_extra_users.sh

terra$ chflags 0 get_extra_users.sh
terra$ ls -lo get_extra_users.sh
-rwxr-xr-x  1 solene  solene  - 749 Apr  3  2023 get_extra_users.sh
```

=> https://man.openbsd.org/chflags OpenBSD manual pages: chflags

## Crontab extra parameters

The OpenBSD crontab format has received a few neat additions over the last years.

* random number for time field: you can use `~` in a field instead of a number or `*` to generate a random value that will remain stable until the crontab is reloaded.  Things like `~/5` work.  You can force the random value within a range with `20~40` to get values between 20 and 40.
* only send an email if the return code isn't 0 for the cron job: add `-n` between the time and the command, like in `0 * * * * -n /bin/something`.
* only run one instance of a job at a time: add `-s` between the time and the command, like in `* * * * * -s /bin/something`.  This is incredibly useful for cron job that shouldn't be running twice in parallel, if the job duration is longer than usual, you are ensured it will never start a new instance until the previous one is done.
* no logging: add `-q` between the time and the command, like in `* * * * * -q /bin/something`; the effect is that this cron job will not be logged in `/var/cron/log`.

It's possible to use a combination of flags like `-ns`.  The random time is useful when you have multiple systems and you don't want them all to run a command at the same time, for example when they would trigger heavy I/O on a remote server.  This was created to replace the usual `0 * * * * sleep $(( $RANDOM % 3600 )) && something` idiom that sleeps for a random time of up to an hour before running the command.
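
Here is what a crontab using these additions could look like (the script paths are made up for the example):

```
# random minute each hour, stable until the crontab is reloaded
~ * * * *       /usr/local/bin/sync-mirror.sh
# only send a mail if the backup exits with a non-zero status
0 2 * * * -n    /usr/local/bin/backup.sh
# never start a new instance while the previous one is still running
*/10 * * * * -s /usr/local/bin/long-job.sh
```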

=> https://man.openbsd.org/crontab.5 OpenBSD manual pages: crontab

## Auto installing media

One cool OpenBSD feature is the ability to easily create installation media with pre-configured answers.  This is done by injecting a specific file into the `bsd.rd` install kernel.

There is a simple tool named upobsd, created by semarie@, to easily modify such a bsd.rd file to include the autoinstall file; I forked the project to continue its maintenance.

In addition to automatically installing OpenBSD with users, SSH configuration, the sets to install, etc., it's also possible to add a site.tgz archive along with the usual set archives, containing files you want to add to the system; this can include a script run at first boot to trigger some automation!

These features are a must-have if you run OpenBSD in production and have many machines to manage; enrolling a new device into the fleet should be as automated as possible.

=> https://github.com/rapenne-s/upobsd GitHub project page: upobsd
=> https://man.openbsd.org/autoinstall OpenBSD manual pages: autoinstall
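
To give an idea, the answer file is a list of "question = answer" lines matching the installer prompts.  The following is only an illustration; the exact question strings must match what the installer asks, so check autoinstall(8) before using it.

```
System hostname = demo
Password for root account = **************
Setup a user = puffy
Which disk is the root disk = sd0
Location of sets = http
```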

## apmd daemon hooks

apmd is certainly running on most OpenBSD laptops and desktops around, but it has features that aren't related to its command line flags, so you may have missed them.

There are specific file names that can contain a script to be run upon some event such as suspend, resume, hibernate, etc.

A classic usage is to run `xlock` in one's X session on suspend, so the system will require a password on resume.

=> https://dataswamp.org/~solene/2021-07-30-openbsd-xidle-xlock.html#_Resume_/_Suspend_case Older blog post: xlock from apmd suspend script

The man page explains all, but basically this works like this for running a backup program when you connect your laptop to the power plug:

```shell
# mkdir -p /etc/apm
# vi /etc/apm/powerup
```

You need to write a regular script:

```shell
#!/bin/sh

/usr/local/bin/my_backup_script
```

Then, make it executable

```shell
# chmod +x /etc/apm/powerup
```

The daemon apmd will automatically run this script when you connect a system back to AC power.

The method is the same for:

* hibernate
* resume
* suspend
* standby
* powerup
* powerdown

This makes it very easy to schedule tasks on such events.

=> https://man.openbsd.org/apmd#FILES OpenBSD manual page: apmd (section FILES)

## Using hotplugd for hooks on devices events

A bit similar to apmd running a script upon events, hotplugd is a service that allows running a script when a device is added or removed.

A typical use is to automatically mount a USB memory stick when it is plugged into the system, or to start the cups daemon when powering on your USB printer.

The script receives two parameters representing the device class and the device name, so you can use them to know what was connected.  The example provided in the man page is a good starting point.

The scripts aren't really straightforward to write: you need to make a precise list of the hardware you expect and what to run for each, and remember to skip unknown hardware.  Don't forget to make the scripts executable, otherwise it won't work.
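 
To give an idea, an attach script could look like this; the device names are examples and the matching logic is up to you.

```shell
#!/bin/sh
# /etc/hotplug/attach
DEVCLASS=$1
DEVNAME=$2

case "$DEVCLASS" in
2)  # disk devices
    case "$DEVNAME" in
    sd1) mount /dev/sd1i /mnt/usb ;;
    esac
    ;;
esac
```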

=> https://man.openbsd.org/hotplugd OpenBSD manual page: hotplugd

## Altroot

Finally, there is a feature that looks pretty cool.  If an OpenBSD partition `/altroot/` exists in `/etc/fstab` and the daily script environment has the variable `ROOTBACKUP=1`, the root partition will be duplicated to it.  This permits keeping an extra root partition in sync with the main root partition.  Obviously, it's more useful if the altroot partition is on another drive.  The duplication is done with `dd`.  You can look at the exact code in the script `/etc/daily`.
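
Enabling it boils down to an fstab entry using the "xx" option, so the partition is never mounted, plus the variable added to the daily job in root's crontab.  The device name below is an example; see the FAQ link at the end of this section.

```
# grep altroot /etc/fstab
/dev/sd1a /altroot ffs xx 0 0
# crontab -l | grep daily
30 1 * * * ROOTBACKUP=1 /bin/sh /etc/daily
```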

However, it's not clear how to boot from this partition if you didn't install a bootloader or create an EFI partition on that disk...

=> https://man.openbsd.org/hier OpenBSD manual pages: hier (hier stands for file system hierarchy)
=> https://man.openbsd.org/daily OpenBSD manual pages: daily
=> https://www.openbsd.org/faq/faq14.html#altroot OpenBSD FAQ: Root partition backup

## talk: local chat in the terminal

OpenBSD comes with a program named "talk" that creates a 1-to-1 chat with another user, either on the local system or on a remote one (the setup is more complicated).  This is not asynchronous: both users must be logged into the system to use `talk`.

This program isn't OpenBSD specific and can be used on Linux as well, but it's so fun, effective and easy to set up that I wanted to write about it.

The setup is easy:

```shell
# echo "ntalk           dgram   udp     wait    root    /usr/libexec/ntalkd     ntalkd" >> /etc/inetd.conf
# rcctl enable inetd
# rcctl start inetd
```

The communication happens on localhost on UDP ports 517 and 518, don't open them to the Internet!  If you want to allow a remote system, use a VPN to encrypt the traffic and allow ports 517/518 only for the VPN.

The usage is simple; if you want alice and bob to talk to each other:

* alice types `talk bob` (bob must be logged in as well)
* bob receives a message in their terminal saying that alice wants to talk
* bob types `talk alice`
* a terminal UI appears for both users; what they write appears in the top half of the UI, and the messages from the other person appear in the bottom half

This is a bit archaic, but it works fine and comes with the base system.  It does the job when you just want to speak to someone.

# Conclusion

There are interesting features in OpenBSD that I wanted to highlight a bit, maybe you will find them useful.  If you know of cool features that could be added to this list, please reach out to me!
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-rarely-known-openbsd-features</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-rarely-known-openbsd-features</link>
 <pubDate>Sat, 24 Feb 2024 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Mounting video ram on Linux</title>
 <description>
   <![CDATA[
<pre># Introduction

Hi, did you ever wonder if you could use your GPU memory to back a mount point, like one does with tmpfs and RAM?

Well, there is a project named vramfs that allows you to do exactly this on FUSE-compatible operating systems.

In this test, I used an NVIDIA GTX 1060 6GB in an external GPU case, connected with a Thunderbolt cable to a Lenovo T470 laptop running Gentoo.

=> https://github.com/Overv/vramfs vramfs official GitHub project page

# Setup

Install the dependencies: you need a C++ compiler and the OpenCL headers for C++ (the package name usually contains "clhpp").

Download the sources, either with git or using an archive.

Run `make` and you should obtain a binary in `bin/vramfs`.
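
For reference, a rough sketch of the fetch-and-build steps once the dependencies are installed:

```
git clone https://github.com/Overv/vramfs
cd vramfs
make
# the binary is now available at bin/vramfs
```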

# Usage

It's pretty straightforward to use: as root, run `vramfs /mountpoint 3G` to mount 3 GB of storage on `/mountpoint`.

The program stays in the foreground; press Ctrl+C to unmount and stop the mount point.
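
For example, a quick session could look like this (the mount point and the size are arbitrary choices):

```
mkdir -p /mnt/vram
./bin/vramfs /mnt/vram 3G
# from another terminal, use /mnt/vram like any other file system
```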

# Speed test

I did a simple speed test using `dd` to measure the write speed compared to a tmpfs.

The vramfs mount point was able to achieve 971 MB/s; it was CPU bound by the FUSE program, because FUSE isn't very efficient compared to a kernel file system driver.

```
t470 /mnt/vram # env LC_ALL=C dd if=/dev/zero of=here.disk bs=64k count=30000
30000+0 records in
30000+0 records out
1966080000 bytes (2.0 GB, 1.8 GiB) copied, 2.02388 s, 971 MB/s
```

Meanwhile, the good old tmpfs reached 3.2 GB/s without using much CPU: a clear winner.

```
t470 /mnt/tmpfs # env LC_ALL=C dd if=/dev/zero of=here.disk bs=64k count=30000
30000+0 records in
30000+0 records out
1966080000 bytes (2.0 GB, 1.8 GiB) copied, 0.611312 s, 3.2 GB/s
```

# Limitations

I tried to use the vram mount point as a temporary directory for portage (the Gentoo tool that builds packages), but it failed with an error.  After this error, I had to unmount and recreate the mount point, otherwise I was left with an irremovable directory.  There are bugs in vramfs, no doubt about it :-)

The Arch Linux wiki has a guide explaining how to use vramfs to store a swap file, but that seems risky for system stability.

=> https://wiki.archlinux.org/title/Swap_on_video_RAM#FUSE_filesystem ArchWiki: Swap on video RAM

# Conclusion

It's pretty cool to know that on Linux you can do almost what you want, even store data in your GPU memory.

However, I'm still trying to figure out a real use case for vramfs, beyond the fact that it's pretty cool and impressive.  If you find a useful situation, please let me know.
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-mount-vram-on-linux</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-mount-vram-on-linux</link>
 <pubDate>Mon, 12 Feb 2024 00:00:00 GMT</pubDate>
</item>
<item>
 <title>Hosting Shaarli on OpenBSD</title>
 <description>
   <![CDATA[
<pre># Introduction

This guide explains how to install the PHP web service Shaarli on OpenBSD.

Shaarli is a bookmarking service that also publishes RSS feeds: you can easily add new links, attach text and tags to them, and share them with others or keep each entry private if you prefer.

=> https://github.com/shaarli/Shaarli Shaarli GitHub Project page

# Setup

The software is pretty easy to install using the base system httpd and PHP (I used the latest PHP version available in packages at the time of writing).

## Deploy Shaarli

Download the latest version of Shaarli available on their GitHub project.

=> https://github.com/shaarli/Shaarli/releases Shaarli releases on GitHub

Extract the archive and move the directory `Shaarli` into `/var/www/`.

Change the owner of the following directories to the user `www`; this is required for Shaarli to work properly.  For security's sake, don't chown all of Shaarli's files to `www`: it's safer when a program can't modify its own code.

```
chown www /var/www/Shaarli/{cache,data,pagecache,tmp}
```

## Install the packages

We need a few packages to make it work.  I'm using PHP 8.3 in the example, but you can replace it with the version you want:

```
pkg_add php--%8.3 php-curl--%8.3 php-gd--%8.3 php-intl--%8.3
```

By default, the PHP modules aren't enabled on OpenBSD; you can enable them with:

```
for i in gd curl intl opcache; do ln -s "/etc/php-8.3.sample/${i}.ini" /etc/php-8.3/ ; done
```

Now, enable and start PHP service:

```
rcctl enable php83_fpm
rcctl start php83_fpm
```

If you want Shaarli to be able to make outgoing connections to fetch remote content, you need to make some changes in the chroot directory; everything is explained in the file `/usr/local/share/doc/pkg-readmes/php-INSTALLED.VERSION`.  The sketch below shows the usual idea.
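
As a hedged sketch of what the pkg-readme typically describes (the readme is authoritative for your PHP version, so double-check the paths there), DNS resolution and TLS certificate validation from inside the `/var/www` chroot usually require copying a couple of files into it:

```
mkdir -p /var/www/etc/ssl
cp -p /etc/resolv.conf /var/www/etc/
cp -p /etc/ssl/cert.pem /var/www/etc/ssl/
```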

## Configure httpd

This guide won't cover the setup for TLS as it's always the same procedure, and it may depend on how you prefer to generate the TLS certificates.

Create the file `/etc/httpd.conf` and add the following content; make sure to replace all the text in capitals with real values:

```
server "YOUR_HOSTNAME_HERE" {
   listen on * port 80

   # don't rewrite for assets (fonts, images)
   location "/tpl/*" {
       root "/Shaarli/"
   }

   location "/doc/*" {
       root "/Shaarli/"
   }

   location "/cache/*" {
       root "/Shaarli/"
   }

   location "*.php" {
       fastcgi socket "/run/php-fpm.sock"
       root "/Shaarli"
   }

   location "*index.php*" {
       root "/Shaarli"
       fastcgi socket "/run/php-fpm.sock"
   }

   location match "/(.*)" {
       request rewrite "/index.php%1"
   }

   location "/*" {
       root "/Shaarli"
   }
}
```

Enable and start httpd:

```
rcctl enable httpd
rcctl start httpd
```

## Configure your firewall

If you configured PF to block incoming traffic by default, you have to open port 80, and also port 443 if you enable HTTPS.
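
For example, a minimal rule for a default-block ruleset could look like this (adapt it to your existing `/etc/pf.conf`), then reload with `pfctl -f /etc/pf.conf`:

```
# excerpt for /etc/pf.conf
pass in on egress inet proto tcp to port { 80 443 }
```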

# Installing Shaarli

Now you should have a working Shaarli when opening `http://YOUR_HOSTNAME_HERE/index.php/`: all lights should be green, and you can configure the instance as you wish.

# Conclusion

Shaarli is a really handy piece of software, especially for heavy RSS users who may have a huge stream of news to read.  What's cool is the sharing feature: you can let other people subscribe to your own feed of links.
</pre>
   ]]>
 </description>
 <guid>gopher://dataswamp.org:70/1/~solene/article-shaarli-openbsd</guid>
 <link>gopher://dataswamp.org:70/1/~solene/article-shaarli-openbsd</link>
 <pubDate>Tue, 23 Jan 2024 00:00:00 GMT</pubDate>
</item>

 </channel>
</rss>