<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Joyrex</title>
    <link>https://blog.joyrex.net/</link>
    <description></description>
    <pubDate>Sat, 04 Apr 2026 05:00:32 +0000</pubDate>
    <image>
      <url>https://i.snap.as/dEsq0Hg1.png</url>
      <title>Joyrex</title>
      <link>https://blog.joyrex.net/</link>
    </image>
    <item>
      <title>MeshCore 2: Roof, Community, AliExpress</title>
      <link>https://blog.joyrex.net/meshcore-2-roof-community-aliexpress?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I have ordered parts from AliExpress, this is getting to become a thing…&#xA;&#xA;!--more--&#xA;&#xA;In my previous post I was talking about how I ordered a solar-power node and was looking forward to setting that up on my roof to act as a repeater. Since then I&#39;ve gotten that going, expanded my collection of gear, and learned some lessons.&#xA;&#xA;Discord / Other Sites&#xA;&#xA;People in the MeshCore public chat sent me an invite to the MeshCoreAus Discord. This is the social hub of enthusiasts from around Australia, and has people interested in all the parts of this network. Some people are into tracking the data on the Internet (via MQTT bridges), some people are into wardriving to map out signal reliability, some people are really into building the gear in the smallest way/lightest way/whatever restrictions they&#39;ve given themselves. There is an extreme amount of knowledge there, and everyone I&#39;ve seen chat is friendly. A rare thing in an online community.&#xA;&#xA;The discord has a resources channel, which really opened my eyes to other information channels. Mainly:&#xA;&#xA;Eastmesh and their sub-sites. Of particular note: a live traffic view got announced yesterday and it&#39;s beautiful. xJARiD (developer of the site) recommended turning on both &#34;Matrix&#34; and &#34;Rain&#34; modes as a bit of a Matrix-y treat.&#xA;MeshCoreAus Wiki has some useful information to people trying to understand the layout and standards of the network. I particularly found \[ENT\] Node Names really useful to identify what kind of stuff I was talking to.&#xA;VK3TWO&#39;s shopping spreadsheet is extremely dangerous. It is super useful, but with all the links to AliExpress and the costs/recommendations given, you start getting ideas... It&#39;s an amazing resource. 
NB: The spreadsheet goes through different versions and the links to the spreadsheet may change so it&#39;s always best to get the most recent version from the discord post.&#xA;&#xA;So, the community feels extremely healthy. Watching the chatter on the network, I am just one of many that have joined recently and are helping it grow quickly.&#xA;&#xA;The Repeater&#xA;&#xA;Once my SenseCAP Solar Node P1 Pro arrived, I got to flashing it with OTAFIX and MeshCore right away. The process went smoothly, as expected. OTAFIX, I discovered, is a fix to the bootloader that allows you to update the firmware over the air; no more having to plug it into the laptop to give it updates. This is extremely handy when the device is on your roof and you don&#39;t like the roof (more on that later). The OTAFIX installed cleanly (I used the update-xiaonrf52840blebootloader version of the release) and then flashed MeshCore on it using the standard MeshCore flasher. It got the latest version of the firmware (1.14.1), and I also flashed my companion device at the same time.&#xA;&#xA;My neighbour has an old flatbed ute backed onto the corner of my property, so I was able to get on the back of that and set the device on the roof to see how it performed, and the answer is: well! I was suddenly seeing more messages. Anywhere I went in the house, my companion device was giving me better coverage. I was able to compare what I saw via the companion device to what I saw on the discord version of the public/#ping/#test channels.&#xA;&#xA;So, the next step was to get the repeater onto my TV aerial/antenna. I asked my neighbour to borrow his ladder, and he left a standard upsidedown-V shaped one out on the flatbed of the old truck for me. A couple days later it was nice and dry and I decided to get up there and get the thing mounted. This was a bad idea.&#xA;&#xA;On the bed of the truck, I built the attached mounting gear that came with the SenseCAP. 
Then I opened the ladder and put it on the bed of the truck and started climbing. Due to the height I only had to go up it a bit over halfway up the ladder to then transition to the roof, but it was still took me a minute of working up the nerve. Eventually I did get up there, though, and I was able to slowly shuffle my way to my aerial. I started attaching the strap and gear to the aerial and I realised the strap thing they give in the kit was made for a much larger pole diameter than my aerial used. Someone long ago had welded some square shaped bracket things about halfway up the pole, though, so I figured I could make the strap as tight as possible by myself, then let it rest on the top of the bracket things, so they take the weight and if the strap isn&#39;t perfect, that&#39;s fine.&#xA;&#xA;my neighbour catches me setting up the repeater&#xA;&#xA;I got the repeater hooked up, took a quick glance at my phone and everything seemed OK.. so I started heading back towards the ladder. This is where the fear really set in. I made it to the very edge, but of course, getting back on the ladder is the hardest part. I was sitting up there, uncomfortably trying to shift my weight around and figure out how I was going to do this, when a familiar sensation came over me. I was sweating but cold.. my body was shaking.. I felt like I needed to spew. I was having a full blown panic attack! I had told no one I was doing this.. what happened if the ladder shifted when I tried to put my weight on it? Why did I set it up on the back of a ute that can move up and down too? This was a terrible idea, why did I even try this? What will I do? Should I call someone? Am I just trapped? Around and around my mind went, spiralling until I was about die. I am not made for roofs.&#xA;&#xA;I have had panic attacks before, and have talked myself out of bad trips before, so I just tried to break the cycle.. I tried to think about the walk I did with my dog earlier in the day. 
I tried to admire the beautiful weather and view. I took notice of what cars were driving by on the road nearby. I focused on taking deep breaths. Anything to make my mind stop spiralling and let my body come down (emotion wise, not fall off the roof wise).&#xA;&#xA;I don’t know how long I was up there on the edge.. 20 minutes maybe? I know it was a while, but finally my body started to come down. Eventually I could think about my situation and not spiral, and I came up with a plan on how I’d get back on the ladder. It worked. I made it. I got off the ladder and onto the back of the ute and just stood there letting the adrenaline finish coursing through my body. I made it. I went inside and drank a bunch of water. I came back out to clean up my stuff and… that’s when I spotted it. I had hooked up the repeater to the aerial, but as part of setting up the mounting gear, I had to detach and re-attach the antenna for the device. I forgot to re-attach it. Now, there are warnings everywhere about running your device without an antenna attached. Apparently you can fry the radio inside, making it “deaf”, but that wasn’t on my mind at the moment. I was just kicking myself that I set up a repeater that couldn’t talk to anything. There was no way in hell I was getting up on that roof again though.&#xA;&#xA;I spoke to my neighbour (a builder and a firey with the cfa) and he had absolutely zero problems going up there and attaching the antenna to the device for me. He made it look so easy….&#xA;&#xA;It was only after everything was re-attached that I remembered that “don’t run a device without an antenna” thing, and mine ran for 24h like that. Doing \some reading\, it seems like running the device without an antenna can push all the transmit power back into chips that don’t expect it. Symptoms will be the device being totally “deaf”, or slowly going deaf over time. My repeater isn’t totally deaf, so I guess I’ll just have to keep an eye on it and see if I start losing data. 
These transmissions are (I think) low watts.. so maybe my stuff will be OK? We will see. If I have to get up there and replace that thing, though.. ooooohhhh boy will that be something. Might have to buy the neighbour a 6-pack…..&#xA;&#xA;More Devices!&#xA;&#xA;In addition to my normal companion device, I got interested in two other things: something smaller to take with me, and an MQTT device. For both of these, I bought pre-made things so, like the repeater, I have something known good before I start trying to make my own things and have to debug stuff. I also looked at that Google Docs link above, the one with all the info (and aliexpress links) from VK3TWO. So I am also starting to get into the DIY side too. I’ll quickly cover what I got and issues/thoughts:&#xA;&#xA;Tag&#xA;&#xA;I was interested in a more mobile device than my WisMesh Pocket companion. The companion is good, but with the antenna on it, it’s kind of a pain to carry around. There looked to be two main options for this: A SenseCAP Card Tracker T1000-E for Meshtastic and a WisMesh Tag, The Pocket-Sized, Compact Meshtastic Tracker. These are both vaguely credit card-shaped devices with no screens, but bluetooth, gps, and the LoRa radio gear inside. The WisMesh Tag has a bigger battery, so I went with that. It works great. It’s able to get stuff from my repeater no worries -- in fact it&#39;s become my primary device. My WisMesh Pocket is still around and talking, but I haven&#39;t switched to it in the last day. They are different devices on the network (nannou and nannou-tag), so it&#39;s not like they share one &#34;account&#34; or anything like that.&#xA;&#xA;The tag doesn&#39;t do well when driving around, but that&#39;s expected I think. Once we stopped driving it was able to pick up local repeaters and send/receive some data.&#xA;&#xA;MQTT Device&#xA;&#xA;This one was disappointing. 
I just assumed everything that runs Meshtastic also had MeshCore firmware for it too, but not the WisMesh WiFi Gateway Wireless MQTT Gateway for Meshtastic. At least, not in the normal firmware flasher. I still need to search around.. I think I should be able to get this to work, it might just involve some manual firmware flashing and stuff, which is fine, because that’s the next aspect I want to get into anyway.&#xA;&#xA;AliExpress Gear&#xA;&#xA;One particularly popular brand in the MeshCore community is Heltec, but I&#39;ve largely been playing with RAK wireless-based stuff. For something to play with, I got a couple Heltec v4 kits. One with GPS and one without. I also got a couple sx1262 modules and a bunch of antennas.&#xA;&#xA;All arrived in good condition from AliExpress, but I haven&#39;t played with any of it yet.&#xA;&#xA;Community&#xA;&#xA;As I said before, the AusMeshCore seems healthy, and growing. There&#39;s people in the discord and in the Public chat talking about where they can put new repeaters to extend range. There&#39;s people that check in on the Vic chat from Tasmania (yes, the mesh extends across the Tasman!) and NSW. It&#39;s a fun time!&#xA;&#xA;One thing that doesn&#39;t get mentioned much, although a user (Esh) in the chat has a copy/paste response for people that they&#39;re starting to use to help, is what channels are available on the mesh chat. There&#39;s the Public channel that most everyone is hooked up to, there&#39;s private channels (no idea if those actually get used), and then there&#39;s various other public hashtag based channels. 
From what I can tell, your device basically tags your message with the hashtag somehow, so other people following that hashtag are able to see it in a seperate channel, otherwise it gets ignored when it gets to another device.&#xA;&#xA;Anyway, in the interest of helping people find new places to converse, here are some of the ones I&#39;ve discovered, either from Esh or from other people mentioning them in the chat. Note this is specific to the Victorian mesh network, but I assume some of the common ones are on other networks too:&#xA;&#xA;Testing/Debug:&#xA;&#xA;ping - say ping, and if the bot hears you, it will respond with a hop count/route. This is on Discord so you can see it there too.&#xA;test - more open, sometimes people will respond to tell you they got your message, sometimes they won’t. If you definitely want a response, ask for it in your message. This is on Discord so you can see it there too.&#xA;meshbot - A more full-featured bot. Type ‘help’ in the channel to get the bot to list what commands you can use. ‘multitest’ is popular (lists different paths to you)&#xA;&#xA;General:&#xA;&#xA;politics - I think people from all ranges of politics are on here, so it’s interesting. Everyone seemed to agree Albo’s recently “everything is OK, don’t panic” national address was a bit of a joke though.&#xA;jokes - Speaking of jokes, soooooooo many puns and dad jokes are in here. You can tell what the primary user of this technology is. I love it.&#xA;electronics&#xA;space&#xA;motorcycles&#xA;random&#xA;&#xA;Location-based:&#xA;&#xA;geelong&#xA;gippsland&#xA;bendigo&#xA;ballarat&#xA;&#xA;Those are the ones I’ve seen mentioned, but I imagine there are ones for other areas too.&#xA;&#xA;Conclusion 2&#xA;&#xA;I’m still loving this. I find in the evenings I’m checking the public chat similar to the way I’m looking at my discord or matrix chats. It’s a fun community.&#xA;&#xA;The roof experience was bad. That’s one horse I don’t think I’m going to get back up on. 
Maybe if I had a better/proper ladder.. or maybe just a 10m long ramp so I can just easily walk up and down to get to the roof like normal 😉. After an hour, I was still shaking badly.. it wasn’t a fun time.. but I survived.&#xA;&#xA;My next steps are to look at the MQTT gateway to see if I can get that working and reporting back to eastmesh, and to play with the AliExpress gear. I also need to get my 3d printer re-leveled and printing again so I can print some cases for the stuff I’m going to build.&#xA;&#xA;Onwards!]]&gt;</description>
      <content:encoded><![CDATA[<p>I have ordered parts from AliExpress, this is getting to become a thing…</p>



<p>In my <a href="https://blog.joyrex.net/my-adventures-with-meshtastic-meshcore-so-far-qf48">previous post</a> I talked about ordering a solar-powered node that I was looking forward to setting up on my roof to act as a repeater. Since then I&#39;ve gotten that going, expanded my collection of gear, and learned some lessons.</p>

<h2 id="discord-other-sites">Discord / Other Sites</h2>

<p>People in the MeshCore public chat sent me an invite to the <a href="https://discord.gg/2DEc3fj2ZF">MeshCoreAus Discord</a>. This is the social hub of enthusiasts from around Australia, and has people interested in all the parts of this network. Some people are into tracking the data on the Internet (via MQTT bridges), some people are into wardriving to map out signal reliability, some people are really into building the gear in the smallest way/lightest way/whatever restrictions they&#39;ve given themselves. There is an extreme amount of knowledge there, and everyone I&#39;ve seen chat is friendly. A rare thing in an online community.</p>

<p>The Discord has a resources channel, which really opened my eyes to other sources of information. Mainly:</p>
<ul><li><a href="https://eastmesh.au">Eastmesh</a> and their sub-sites. Of particular note: <a href="https://core.eastmesh.au/#/live">a live traffic view</a> got announced yesterday and it&#39;s beautiful. xJARiD (developer of the site) recommended turning on both “Matrix” and “Rain” modes as a bit of a Matrix-y treat.</li>
<li><a href="https://wiki.meshcoreaus.org/">MeshCoreAus Wiki</a> has some useful information for people trying to understand the layout and standards of the network. I particularly found <a href="https://wiki.meshcoreaus.org/books/ent-naming/page/ent-node-names">[ENT] Node Names</a> really useful for identifying what kind of stuff I was talking to.</li>
<li><a href="https://docs.google.com/spreadsheets/d/14mpwnYbR-dK2Sh8i8vfCVXD8N-ZR3uM9/edit?usp=sharing&amp;ouid=110657726513738198715&amp;rtpof=true&amp;sd=true">VK3TWO&#39;s shopping spreadsheet</a> is extremely dangerous. It is super useful, but with all the links to AliExpress and the costs/recommendations given, you start getting ideas... It&#39;s an amazing resource. NB: The spreadsheet goes through different versions and the links to the spreadsheet may change so it&#39;s always best to get the most recent version from the <a href="https://discord.com/channels/1446313890505035900/1459090032643014801">discord post</a>.</li></ul>

<p>So, the community feels extremely healthy. Watching the chatter on the network, I am just one of many that have joined recently and are helping it grow quickly.</p>

<h2 id="the-repeater">The Repeater</h2>

<p>Once my <a href="https://iot-store.com.au/products/sensecap-solar-node-p1-for-meshtastic">SenseCAP Solar Node P1 Pro</a> arrived, I got to flashing it with OTAFIX and MeshCore right away. The process went smoothly, as expected. <a href="https://github.com/oltaco/Adafruit_nRF52_Bootloader_OTAFIX">OTAFIX</a>, I discovered, is a fix to the bootloader that allows you to update the firmware over the air; no more having to plug the device into the laptop to give it updates. This is extremely handy when the device is on your roof and you don&#39;t like the roof (more on that later). OTAFIX installed cleanly (I used the <code>update-xiao_nrf52840_ble_bootloader</code> version of the release), and I then flashed MeshCore onto it using the standard <a href="https://meshcore.co.uk/flasher.html">MeshCore flasher</a>. It got the latest version of the firmware (1.14.1), and I also flashed my companion device at the same time.</p>

<p>My neighbour has an old flatbed ute backed onto the corner of my property, so I was able to get on the back of that and set the device on the roof to see how it performed, and the answer is: well! I was suddenly seeing more messages. Anywhere I went in the house, my companion device was giving me better coverage. I was able to compare what I saw via the companion device to what I saw on the discord version of the public/<a href="https://blog.joyrex.net/tag:ping" class="hashtag"><span>#</span><span class="p-category">ping</span></a>/<a href="https://blog.joyrex.net/tag:test" class="hashtag"><span>#</span><span class="p-category">test</span></a> channels.</p>

<p>So, the next step was to get the repeater onto my TV aerial/antenna. I asked my neighbour if I could borrow his ladder, and he left a standard upside-down-V-shaped one out on the flatbed of the old truck for me. A couple of days later it was nice and dry and I decided to get up there and get the thing mounted. This was a bad idea.</p>

<p>On the bed of the truck, I assembled the mounting gear that came with the SenseCAP. Then I opened the ladder, put it on the bed of the truck, and started climbing. Due to the height I only had to go a bit over halfway up the ladder to then transition to the roof, but it still took me a minute of working up the nerve. Eventually I did get up there, though, and I was able to slowly shuffle my way to my aerial. I started attaching the strap and gear to the aerial and realised the strap they give you in the kit was made for a much larger pole diameter than my aerial used. Someone long ago had welded some square-shaped bracket things about halfway up the pole, though, so I figured I could make the strap as tight as possible by myself, then let it rest on top of the bracket things, so they take the weight and if the strap isn&#39;t perfect, that&#39;s fine.</p>

<p><img src="https://i.snap.as/RPzIT6tr.png" alt=""/>
<em>my neighbour catches me setting up the repeater</em></p>

<p>I got the repeater hooked up, took a quick glance at my phone and everything seemed OK.. so I started heading back towards the ladder. This is where the fear really set in. I made it to the very edge, but of course, getting back on the ladder is the hardest part. I was sitting up there, uncomfortably trying to shift my weight around and figure out how I was going to do this, when a familiar sensation came over me. I was sweating but cold.. my body was shaking.. I felt like I needed to spew. I was having a full-blown panic attack! I had told no one I was doing this.. what would happen if the ladder shifted when I tried to put my weight on it? Why did I set it up on the back of a ute that can move up and down too? This was a terrible idea, why did I even try this? What will I do? Should I call someone? Am I just trapped? Around and around my mind went, spiralling until I was about to die. I am not made for roofs.</p>

<p>I have had panic attacks before, and have talked myself out of bad trips before, so I just tried to break the cycle.. I tried to think about the walk I did with my dog earlier in the day. I tried to admire the beautiful weather and view. I took notice of what cars were driving by on the road nearby. I focused on taking deep breaths. Anything to make my mind stop spiralling and let my body come down (emotion wise, not fall off the roof wise).</p>

<p>I don’t know how long I was up there on the edge.. 20 minutes maybe? I know it was a while, but finally my body started to come down. Eventually I could think about my situation and not spiral, and I came up with a plan on how I’d get back on the ladder. It worked. I made it. I got off the ladder and onto the back of the ute and just stood there letting the adrenaline finish coursing through my body. I made it. I went inside and drank a bunch of water. I came back out to clean up my stuff and… that’s when I spotted it. I had hooked up the repeater to the aerial, but as part of setting up the mounting gear, I had to detach and re-attach the antenna for the device. I forgot to re-attach it. Now, there are warnings everywhere about running your device without an antenna attached. Apparently you can fry the radio inside, making it “deaf”, but that wasn’t on my mind at the moment. I was just kicking myself that I set up a repeater that couldn’t talk to anything. There was no way in hell I was getting up on that roof again though.</p>

<p>I spoke to my neighbour (a builder and a firey with the CFA) and he had absolutely zero problems going up there and attaching the antenna to the device for me. He made it look so easy….</p>

<p>It was only after everything was re-attached that I remembered that “don’t run a device without an antenna” thing, and mine ran for 24 hours like that. Doing <a href="https://old.reddit.com/r/meshcore/comments/1pggezm/have_i_been_really_stupid/">some reading</a>, it seems like running the device without an antenna can push all the transmit power back into chips that don’t expect it. Symptoms will be the device being totally “deaf”, or slowly going deaf over time. My repeater isn’t totally deaf, so I guess I’ll just have to keep an eye on it and see if I start losing data. These transmissions are (I think) low wattage.. so maybe my stuff will be OK? We will see. If I have to get up there and replace that thing, though.. ooooohhhh boy will that be something. Might have to buy the neighbour a 6-pack…..</p>
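<p>To put “low watts” in perspective, here’s a quick back-of-the-envelope conversion (a sketch using typical LoRa figures, not measurements from my gear; devices in this class usually transmit somewhere around 14–22 dBm):</p>

```python
def dbm_to_watts(dbm: float) -> float:
    """Convert RF power in dBm (decibels relative to 1 mW) to watts."""
    return 10 ** (dbm / 10) / 1000

# Typical LoRa transmit power levels (assumed, not read from my repeater):
for power_dbm in (14, 20, 22):
    print(f"{power_dbm} dBm = {dbm_to_watts(power_dbm):.3f} W")
```

<p>So even at the top end it’s well under a quarter of a watt, which is presumably part of why some devices survive a stint with no antenna attached.</p>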

<h2 id="more-devices">More Devices!</h2>

<p>In addition to my normal companion device, I got interested in two other things: something smaller to take with me, and an MQTT device. For both of these I bought pre-made things so that, like the repeater, I have something known-good before I start trying to make my own things and have to debug stuff. I also looked at that Google Docs link above, the one with all the info (and AliExpress links) from VK3TWO, so I am starting to get into the DIY side too. I’ll quickly cover what I got and issues/thoughts:</p>

<h3 id="tag">Tag</h3>

<p>I was interested in a more mobile device than my WisMesh Pocket companion. The companion is good, but with the antenna on it, it’s kind of a pain to carry around. There looked to be two main options for this: a <a href="https://iot-store.com.au/products/sensecap-tracker-t1000-e-meshtastic">SenseCAP Card Tracker T1000-E for Meshtastic</a> and a <a href="https://iot-store.com.au/products/wismesh-tag-meshtastic-tracker">WisMesh Tag, The Pocket-Sized, Compact Meshtastic Tracker</a>. These are both vaguely credit-card-shaped devices with no screens, but with Bluetooth, GPS, and the LoRa radio gear inside. The WisMesh Tag has a bigger battery, so I went with that. It works great. It’s able to get stuff from my repeater no worries — in fact it&#39;s become my primary device. My WisMesh Pocket is still around and talking, but I haven&#39;t switched to it in the last day. They are different devices on the network (nannou and nannou-tag), so it&#39;s not like they share one “account” or anything like that.</p>

<p>The tag doesn&#39;t do well when driving around, but that&#39;s expected I think. Once we stopped driving it was able to pick up local repeaters and send/receive some data.</p>

<h3 id="mqtt-device">MQTT Device</h3>

<p>This one was disappointing. I had just assumed that everything that runs Meshtastic also has MeshCore firmware available, but not the <a href="https://iot-store.com.au/products/wismesh-wifi-gateway-meshtastic">WisMesh WiFi Gateway Wireless MQTT Gateway for Meshtastic</a>. At least, not in the normal firmware flasher. I still need to search around.. I think I should be able to get this to work; it might just involve some manual firmware flashing and stuff, which is fine, because that’s the next aspect I want to get into anyway.</p>

<h3 id="aliexpress-gear">AliExpress Gear</h3>

<p>One particularly popular brand in the MeshCore community is Heltec, but I&#39;ve largely been playing with RAKwireless-based stuff. For something to play with, I got a couple of Heltec v4 kits: one <a href="https://www.aliexpress.com/item/1005010168788998.html">with GPS</a> and <a href="https://www.aliexpress.com/item/1005010177636156.html">one without</a>. I also got a couple of <a href="https://www.aliexpress.com/item/1005008094638318.html">SX1262 modules</a> and a <a href="https://www.aliexpress.com/item/1005006673760959.html">bunch of antennas</a>.</p>

<p>All arrived in good condition from AliExpress, but I haven&#39;t played with any of it yet.</p>

<h2 id="community">Community</h2>

<p>As I said before, the MeshCoreAus community seems healthy, and growing. There are people in the Discord and in the Public chat talking about where they can put new repeaters to extend range. There are people that check in on the Vic chat from Tasmania (yes, the mesh extends across Bass Strait!) and NSW. It&#39;s a fun time!</p>

<p>One thing that doesn&#39;t get mentioned much is what channels are available on the mesh chat (although a user, Esh, has a copy/paste response that they&#39;ve started using to help people with this). There&#39;s the Public channel that most everyone is hooked up to, there are private channels (no idea if those actually get used), and then there are various other public hashtag-based channels. From what I can tell, your device basically tags your message with the hashtag somehow, so other people following that hashtag are able to see it in a separate channel; otherwise it gets ignored when it gets to another device.</p>
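<p>My rough mental model of those hashtag channels, sketched in Python (purely illustrative: the names and hashing here are made up, and MeshCore’s real scheme involves shared channel keys rather than a plain hash):</p>

```python
import hashlib

def channel_id(name: str) -> bytes:
    # Hypothetical: derive a short channel identifier from the hashtag name.
    return hashlib.sha256(name.lower().encode()).digest()[:4]

class Node:
    """A node only surfaces messages for channels it has added."""
    def __init__(self, channels):
        self.followed = {channel_id(c) for c in channels}

    def receive(self, chan: bytes, text: str):
        if chan in self.followed:
            return text   # shows up in that channel's view
        return None       # not following: the message is ignored

node = Node(["public", "jokes"])
print(node.receive(channel_id("jokes"), "a dad joke"))    # delivered
print(node.receive(channel_id("bendigo"), "local chat"))  # ignored (None)
```

<p>As far as I can tell, the network still repeats the packets either way; the channel membership only decides what your companion app actually shows you.</p>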

<p>Anyway, in the interest of helping people find new places to converse, here are some of the ones I&#39;ve discovered, either from Esh or from other people mentioning them in the chat. Note this is specific to the Victorian mesh network, but I assume some of the common ones are on other networks too:</p>

<h4 id="testing-debug">Testing/Debug:</h4>
<ul><li><a href="https://blog.joyrex.net/tag:ping" class="hashtag"><span>#</span><span class="p-category">ping</span></a> – say ping, and if the bot hears you, it will respond with a hop count/route. This is on Discord so you can see it there too.</li>
<li><a href="https://blog.joyrex.net/tag:test" class="hashtag"><span>#</span><span class="p-category">test</span></a> – more open, sometimes people will respond to tell you they got your message, sometimes they won’t. If you definitely want a response, ask for it in your message. This is on Discord so you can see it there too.</li>
<li><a href="https://blog.joyrex.net/tag:meshbot" class="hashtag"><span>#</span><span class="p-category">meshbot</span></a> – A more full-featured bot. Type ‘help’ in the channel to get the bot to list what commands you can use. ‘multitest’ is popular (it lists the different paths to you).</li></ul>

<h4 id="general">General:</h4>
<ul><li><a href="https://blog.joyrex.net/tag:politics" class="hashtag"><span>#</span><span class="p-category">politics</span></a> – I think people from all ranges of politics are on here, so it’s interesting. Everyone seemed to agree Albo’s recently “everything is OK, don’t panic” national address was a bit of a joke though.</li>
<li><a href="https://blog.joyrex.net/tag:jokes" class="hashtag"><span>#</span><span class="p-category">jokes</span></a> – Speaking of jokes, soooooooo many puns and dad jokes are in here. You can tell what the primary user of this technology is. I love it.</li>
<li><a href="https://blog.joyrex.net/tag:electronics" class="hashtag"><span>#</span><span class="p-category">electronics</span></a></li>
<li><a href="https://blog.joyrex.net/tag:space" class="hashtag"><span>#</span><span class="p-category">space</span></a></li>
<li><a href="https://blog.joyrex.net/tag:motorcycles" class="hashtag"><span>#</span><span class="p-category">motorcycles</span></a></li>
<li><a href="https://blog.joyrex.net/tag:random" class="hashtag"><span>#</span><span class="p-category">random</span></a></li></ul>

<h4 id="location-based">Location-based:</h4>
<ul><li><a href="https://blog.joyrex.net/tag:geelong" class="hashtag"><span>#</span><span class="p-category">geelong</span></a></li>
<li><a href="https://blog.joyrex.net/tag:gippsland" class="hashtag"><span>#</span><span class="p-category">gippsland</span></a></li>
<li><a href="https://blog.joyrex.net/tag:bendigo" class="hashtag"><span>#</span><span class="p-category">bendigo</span></a></li>
<li><a href="https://blog.joyrex.net/tag:ballarat" class="hashtag"><span>#</span><span class="p-category">ballarat</span></a></li></ul>

<p>Those are the ones I’ve seen mentioned, but I imagine there are ones for other areas too.</p>

<h2 id="conclusion-2">Conclusion 2</h2>

<p>I’m still loving this. I find in the evenings I’m checking the public chat in much the same way I check my Discord or Matrix chats. It’s a fun community.</p>

<p>The roof experience was bad. That’s one horse I don’t think I’m going to get back up on. Maybe if I had a better/proper ladder.. or maybe just a 10m long ramp so I can just easily walk up and down to get to the roof like normal 😉. After an hour, I was still shaking badly.. it wasn’t a fun time.. but I survived.</p>

<p>My next steps are to look at the MQTT gateway to see if I can get that working and reporting back to eastmesh, and to play with the AliExpress gear. I also need to get my 3D printer re-levelled and printing again so I can print some cases for the stuff I’m going to build.</p>

<p>Onwards!</p>
]]></content:encoded>
      <guid>https://blog.joyrex.net/meshcore-2-roof-community-aliexpress</guid>
      <pubDate>Sat, 04 Apr 2026 00:55:28 +0000</pubDate>
    </item>
    <item>
      <title>My Adventures with Meshtastic/MeshCore (so far)</title>
      <link>https://blog.joyrex.net/my-adventures-with-meshtastic-meshcore-so-far-qf48?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[YouTube has gotten me into another niche tech thing…&#xA;&#xA;!--more--&#xA;&#xA;I was watching a Youtube video about how Iran started up a new numbers station since the new war started, and how it got jammed on its original frequency and was moving to another one. It’s wild that Iran is falling back to old tech and the US and Israel just can’t handle it, but that’s not what this post is about.&#xA;&#xA;After seeing the video, Youtube suggested another of the channel’s video, which was titled The Idiots Guide To Meshtastic - Long Range Comms! “Hey, I’m an idiot,” I thought “long range comms in a little handheld device could be cool!”  I’ve always been curious about radio communication even though my knowledge level is very low, and my enthusiasm about having to mount gear on giant poles outside is even lower. Short wave seems to require that type of outside gear, but watching this video, that didn’t seem the case for Meshtastic. Off to Kagi I went to find an Aussie store that sold this gear.&#xA;&#xA;I ended up at IoT Store, a Perth-based place that had a Meshtastic area in their online shop. After some random browsing and reading, I ended up getting a WisMesh Pocket V2 Meshtastic Device, and on impulse I threw in a LoRa Antenna Kit to increase my range. I was again pleasantly surprised that increasing my range didn’t involve adding something I had to post outside and figure out how to run electricity to (I rent).&#xA;&#xA;A few days later the gear arrived, so time to go!&#xA;&#xA;Meshtastic&#xA;&#xA;I’m not going to review the device itself. It uses a WisBlock RAK4631 chip, which seems pretty common and effective for this purpose, and the device seems to work fine. It has an on/off switch, and a single button you can use for browsing menus (long pressing to select stuff). 
The Meshtastic firmware was a bit out of date, but connecting to the device over USB using the web-based flasher in a chrome-based browser worked fine.&#xA;&#xA;I jumped on using the Meshtastic app on my Android phone, hoping to see it start to pick up nearby nodes, and……. nothing.&#xA;&#xA;I was looking at most of the state and there were no nodes. Uh oh.. maybe I should have done some more investigation before buying.&#xA;&#xA;I posted on Mastodon, and some very helpful people told me that I may have to let it run overnight to see if it picks up any nodes, but also Meshtastic wasn’t great at scaling, and that most people in Victoria (my state in Australia) had moved to MeshCore. Luckily, Meshtastic and MeshCore use the same gear and the same frequencies, so my Meshtastic device should be able to get onto the MeshCore network with some extra work.&#xA;&#xA;I let Meshtastic run on my device for 3-4 days, and it found no one. It’s possible I would have found Meshtastic nodes if I had put something up outside to give better range/etc, but that’s exactly what I wanted to avoid. Time to try MeshCore…&#xA;&#xA;MeshCore&#xA;&#xA;Using the same sort of flashing method, but using the MeshCore flasher website instead, I was able to get the firmware installed. It is \slightly\ less noob-friendly (at least to me), and I spent some time trying to figure out why my phone wasn’t able to connect to the new MeshCore-firmware-flashed device. It turns out in the flashing process you have to choose “Companion Bluetooth” to enable the bluetooth radio on the device. I was choosing “Companion USB” as I was flashing via USB, but that wasn’t the way to do it. After that, I was able to connect to it on my phone using the MeshCore app.&#xA;&#xA;A kind person on Mastodon had already told me that Victoria MeshCore people use the “Australia (Narrow)” radio settings to communicate, so I was able to set that:&#xA;&#xA;I saved my settings and checked the map anddddddddd.. nothing. 
uh oh.&#xA;&#xA;I was more confident this time, though. I \knew\ the people were out there, and that Victoria had a good MeshCore network (thanks again Mastodon people). Potentially I had to put something up outside (ugh), but first I had a new app to click random buttons in to see if I could get anything.&#xA;&#xA;At the top of the app is a radio icon. I hit that and had the option of “Advert - Zero Hop” and “Advert - Flood Routed”. Just by the names, zero hop seemed to be contacting everyone close to me, and so I guessed that meant Flood Routed meant it would push everywhere. I did Zero Hop first, and after about 5-10 seconds, saw nothing, so I try Flood Routed… then I tried Flood Routed again 30 seconds later.. and.. I started getting notifications of nodes that were being discovered! It was working!&#xA;&#xA;Oddly, and I have no idea how this works, it was discovering nodes around Albury/Wodonga and one on the other side of Melbourne. Weird. But it was working.. and someone had posted to the public chat! I could see that! I tried to send a message asking for someone to confirm they could see me, but got no response. Damn.&#xA;&#xA;I went to bed for the night. When I woke up the next morning and went back to the app, I was seeing over 100 nodes!&#xA;&#xA;This was great! And there were overnight chats in the public channel! All this was happening after about 9 hours of being on. I was stoked.&#xA;&#xA;I sent another message to the chat asking for confirmation. After sending this, I noticed instead of saying “Sent” under the message, it said “Heard 1 Repeat”. This clued me in that the chat client in the app shows stuff is actually sent if I hear it repeated back to me at least once. When it says “Sent” and doesn’t update to “Heard \# Repeat(s)”, it means the message didn’t make it out. Good to know.&#xA;&#xA;I can explain the early timestamps: I have a cat that likes to wake me up around 5-5:30 in the morning.&#xA;&#xA;Anyway, this was great news. 
I left it and started my day, and checked in later in the afternoon. I had (literally) hundreds of new nodes listed!&#xA;&#xA;There was even a repeater in NSW that I had seen (not directly, but through the network).&#xA;&#xA;It’s now been a couple days and I have maxed out my contacts (nodes) list. The device can only hold 350 nodes, and by default it will add every node that is mentioned on the network. Maxing it out in a couple days is huge! I have ticked an option that cycles out the oldest seen nodes to add the new ones, so I think my list will stay at 350 contacts now.&#xA;&#xA;What’s Next / Annoyances&#xA;&#xA;The public chat is a mix of people testing and people chatting about life or whatever. Yesterday a person visiting Melbourne from Denver, CO, USA hopped on and said g’day. They had brought their MeshCore device down with them. They said Denver is just starting to build its MeshCore network and they liked how popular ours was.&#xA;&#xA;I have found that I get about a 33% success rate of my messages actually making it out to a repeater on the first try. Thankfully the app has the option to long-press the message and say “Send Again”, to let it try and send out again. After a couple tries, it generally makes it out. That was annoying me, so… I’m somewhat doing what I didn’t want to do: I’m buying something to put outside.&#xA;&#xA;As was pointed out to me in the chat, part of the fun of MeshCore (and similar) is building your own devices with the different radio boards/whatever, but for this purchase, I went for another pre-built thing so I can be sure it’s not my terrible soldering if it doesn’t work. I purchased a SenseCAP Solar Node P1 Pro, which I plan to flash with MeshCore in repeater mode. Then I plan to put it somewhere outside, and hope the solar is enough that I don’t have to try and run power to it. 
I am well aware that higher/line of site is better, but I still don’t want to mount a pole to my roof, so I’m planning just to set it somewhere outside, maybe just on my roof, or hanging off it somewhere. We’ll see, but I’m hopeful that extra little access of being outside (instead of my bedroom where the WisBlock is right now) will give me clear access to the multiple repeaters that around me, and I won’t need the height.&#xA;&#xA;Conclusion&#xA;&#xA;I think it’s extremely cool that this invisible network exists and there’s a large group dedicated to helping everyone communicate, either doing it for fun hobby reasons, or “real” reasons. One of the things pushed with Meshtastic/MeshCore is it can be used on rural sites when hiking/on farms/etc where signal won’t reach, and I’m sure it works great for that. It’s sweet this exists and is being run across Victoria’s suburb wasteland around Melbourne, as well as across the state as a whole. I am excited to see how well my external repeater helps my message sending, as well as feeling good that I might be helping out others in my immediate area (1km around me, after that they’ll be closer to another repeater around here) that are on the network (if any). I’m also looking forward to learning about setting up the repeater itself. It scratches that nerd itch.&#xA;&#xA;Things are weird right now in the world, and the Internet is being enshittified more every day. Here’s something that’s pure, done by people for the love of it. It’s great.]]&gt;</description>
      <content:encoded><![CDATA[<p>YouTube has gotten me into another niche tech thing…</p>



<p>I was watching a <a href="https://youtu.be/UZSThymsKyA" title="Youtube video">YouTube video</a> about how Iran started up a new numbers station since the new war started, and how it got jammed on its original frequency and moved to another one. It’s wild that Iran is falling back to old tech and the US and Israel just can’t handle it, but that’s not what this post is about.</p>

<p>After seeing the video, YouTube suggested another of the channel’s videos, titled <a href="https://www.youtube.com/watch?v=N3FXej9fqIk" title="The Idiots Guide To Meshtastic - Long Range Comms!">The Idiots Guide To Meshtastic – Long Range Comms!</a> “Hey, I’m an idiot,” I thought, “long-range comms in a little handheld device could be cool!” I’ve always been curious about radio communication even though my knowledge level is very low, and my enthusiasm for mounting gear on giant poles outside is even lower. Shortwave seems to require that type of outside gear, but watching this video, that didn’t seem to be the case for Meshtastic. Off to Kagi I went to find an Aussie store that sold this gear.</p>

<p>I ended up at <a href="https://iot-store.com.au/collections/meshtastic" title="IoT Store link to its Meshtastic Collection">IoT Store</a>, a Perth-based place that had a Meshtastic area in their online shop. After some random browsing and reading, I ended up getting a <a href="https://iot-store.com.au/products/wismesh-pocket-v2-meshtastic" title="WisMesh Pocket V2 Meshtastic Device">WisMesh Pocket V2 Meshtastic Device</a>, and on impulse I threw in a <a href="https://iot-store.com.au/products/lora-antenna-kit" title="LoRa Antenna Kit Rubber Duck Foldable 2dBi 900-930 MHz">LoRa Antenna Kit</a> to increase my range. I was again pleasantly surprised that increasing my range didn’t involve adding something I had to post outside and figure out how to run electricity to (I rent).</p>

<p>A few days later the gear arrived, so time to go!</p>

<h3 id="meshtastic">Meshtastic</h3>

<p>I’m not going to review the device itself. It uses a WisBlock RAK4631 chip, which seems pretty common and effective for this purpose, and the device seems to work fine. It has an on/off switch, and a single button you can use for browsing menus (long-pressing to select stuff). The Meshtastic firmware was a bit out of date, but connecting to the device over USB using the <a href="https://flasher.meshtastic.org" title="web-based flasher">web-based flasher</a> in a Chrome-based browser worked fine.</p>

<p>I jumped on using the Meshtastic app on my Android phone, hoping to see it start to pick up nearby nodes, and……. nothing.</p>

<p><img src="https://i.snap.as/DR2yqvVe.jpeg" alt=""/></p>

<p>I was looking at most of the state and there were no nodes. Uh oh.. maybe I should have done some more investigation before buying.</p>

<p>I <a href="https://social.joyrex.net/@ejstacey/116209085162942875" title="my post on mastodon about meshtastic">posted on Mastodon</a>, and some very helpful people told me that I may have to let it run overnight to see if it picks up any nodes, but also Meshtastic wasn’t great at scaling, and that most people in Victoria (my state in Australia) had moved to MeshCore. Luckily, Meshtastic and MeshCore use the same gear and the same frequencies, so my Meshtastic device should be able to get onto the MeshCore network with some extra work.</p>

<p>I let Meshtastic run on my device for 3-4 days, and it found no one. It’s possible I would have found Meshtastic nodes if I had put something up outside to give better range/etc, but that’s exactly what I wanted to avoid. Time to try MeshCore…</p>

<h3 id="meshcore">MeshCore</h3>

<p>Using the same sort of flashing method, but using the <a href="https://flasher.meshcore.co.uk" title="MeshCore flasher website">MeshCore flasher website</a> instead, I was able to get the firmware installed. It is <em>slightly</em> less noob-friendly (at least to me), and I spent some time trying to figure out why my phone wasn’t able to connect to the newly MeshCore-flashed device. It turns out that in the flashing process you have to choose “Companion Bluetooth” to enable the Bluetooth radio on the device. I was choosing “Companion USB” as I was flashing via USB, but that wasn’t the way to do it. After that, I was able to connect to it on my phone using the MeshCore app.</p>

<p>A kind person on Mastodon had already told me that Victoria MeshCore people use the “Australia (Narrow)” radio settings to communicate, so I was able to set that:</p>

<p><img src="https://i.snap.as/dIy2dczc.jpeg" alt=""/></p>

<p>I saved my settings and checked the map anddddddddd.. nothing. uh oh.</p>

<p>I was more confident this time, though. I <em>knew</em> the people were out there, and that Victoria had a good MeshCore network (thanks again Mastodon people). Potentially I had to put something up outside (ugh), but first I had a new app to click random buttons in to see if I could get anything.</p>

<p>At the top of the app is a radio icon. I hit that and had the option of “Advert – Zero Hop” and “Advert – Flood Routed”. Just by the names, Zero Hop seemed to contact everyone close to me, so I guessed Flood Routed meant it would push everywhere. I did Zero Hop first, and after about 5-10 seconds, saw nothing, so I tried Flood Routed… then I tried Flood Routed again 30 seconds later.. and.. I started getting notifications of nodes being discovered! It was working!</p>

<p>Oddly, and I have no idea how this works, it was discovering nodes around Albury/Wodonga and one on the other side of Melbourne. Weird. But it was working.. and someone had posted to the public chat! I could see that! I tried to send a message asking for someone to confirm they could see me, but got no response. Damn.</p>

<p>I went to bed for the night. When I woke up the next morning and went back to the app, I was seeing over 100 nodes!</p>

<p><img src="https://i.snap.as/46MEvnhK.jpeg" alt=""/></p>

<p>This was great! And there were overnight chats in the public channel! All this was happening after about 9 hours of being on. I was stoked.</p>

<p>I sent another message to the chat asking for confirmation. After sending this, I noticed that instead of saying “Sent” under the message, it said “Heard 1 Repeat”. This clued me in that the app marks a message as actually sent if the device hears it repeated back at least once. When it says “Sent” and doesn’t update to “Heard # Repeat(s)”, it means the message didn’t make it out. Good to know.</p>

<p><img src="https://i.snap.as/gpNSDqGH.jpeg" alt=""/></p>

<p>I can explain the early timestamps: I have a cat that likes to wake me up around 5-5:30 in the morning.</p>

<p>Anyway, this was great news. I left it and started my day, and checked in later in the afternoon. I had (literally) hundreds of new nodes listed!</p>

<p><img src="https://i.snap.as/ghxmwZ1d.jpeg" alt=""/></p>

<p>There was even a repeater in NSW that I had seen (not directly, but through the network).</p>

<p>It’s now been a couple days and I have maxed out my contacts (nodes) list. The device can only hold 350 nodes, and by default it will add every node that is mentioned on the network. Maxing it out in a couple days is huge! I have ticked an option that cycles out the oldest seen nodes to add the new ones, so I think my list will stay at 350 contacts now.</p>

<h3 id="what-s-next-annoyances">What’s Next / Annoyances</h3>

<p>The public chat is a mix of people testing and people chatting about life or whatever. Yesterday a person visiting Melbourne from Denver, CO, USA hopped on and said g’day. They had brought their MeshCore device down with them. They said Denver is just starting to build its MeshCore network and they liked how popular ours was.</p>

<p>I have found that I get about a 33% success rate of my messages actually making it out to a repeater on the first try. Thankfully the app has the option to long-press the message and choose “Send Again” to let it try again. After a couple tries, it generally makes it out. That was annoying me, so… I’m somewhat doing what I didn’t want to do: I’m buying something to put outside.</p>

<p>As was pointed out to me in the chat, part of the fun of MeshCore (and similar) is building your own devices with the different radio boards/whatever, but for this purchase, I went for another pre-built thing so I can be sure it’s not my terrible soldering if it doesn’t work. I purchased a <a href="https://iot-store.com.au/products/sensecap-solar-node-p1-for-meshtastic" title="SenseCAP Solar Node P1 and P1 Pro for Meshtastic">SenseCAP Solar Node P1 Pro</a>, which I plan to flash with MeshCore in repeater mode. Then I plan to put it somewhere outside, and hope the solar is enough that I don’t have to try and run power to it. I am well aware that higher/line of sight is better, but I still don’t want to mount a pole to my roof, so I’m planning just to set it somewhere outside, maybe just on my roof, or hanging off it somewhere. We’ll see, but I’m hopeful that the little extra advantage of being outside (instead of my bedroom where the WisBlock is right now) will give me clear access to the multiple repeaters that are around me, and I won’t need the height.</p>

<h3 id="conclusion">Conclusion</h3>

<p>I think it’s extremely cool that this invisible network exists and there’s a large group dedicated to helping everyone communicate, either doing it for fun hobby reasons, or “real” reasons. One of the things pushed with Meshtastic/MeshCore is it can be used on rural sites when hiking/on farms/etc where signal won’t reach, and I’m sure it works great for that. It’s sweet this exists and is being run across Victoria’s suburb wasteland around Melbourne, as well as across the state as a whole. I am excited to see how well my external repeater helps my message sending, as well as feeling good that I might be helping out others in my immediate area (1km around me, after that they’ll be closer to another repeater around here) that are on the network (if any). I’m also looking forward to learning about setting up the repeater itself. It scratches that nerd itch.</p>

<p>Things are weird right now in the world, and the Internet is being enshittified more every day. Here’s something that’s pure, done by people for the love of it. It’s great.</p>
]]></content:encoded>
      <guid>https://blog.joyrex.net/my-adventures-with-meshtastic-meshcore-so-far-qf48</guid>
      <pubDate>Thu, 19 Mar 2026 09:15:51 +0000</pubDate>
    </item>
    <item>
      <title>256TB AliExpress Drive Testing</title>
      <link>https://blog.joyrex.net/256tb-aliexpress-drive-testing?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[This isn’t a new thing, but I wanted to explore it myself.&#xA;&#xA;I found a 256T portable drive on AliExpress for $31USD. I had to check it out.&#xA;&#xA;!--more--&#xA;&#xA;The Hard Disk&#xA;&#xA;Someone on Discord mentioned that AliExpress’s 11.11 sale was coming up, so I browsed the app to see if there’s any stupid stuff I wanted to buy (spoiler alert: there was), but one thing that stuck out was USB portable drive. Not only was it super cheap, but it ranged in sizes from 1T to 256T! Amazing! A steal!&#xA;&#xA;After buying, it arrived the following week. It said 256TB on the box, so they were sticking with that claim I guess. My first impression was it was LIGHT. Obviously no spinning disks in the case, but it was light even if it was going to hold solid state storage. I’m beginning to think it wasn’t the amazing deal I thought it’d be.&#xA;&#xA;Opening up the box, you can instantly tell the “metal housing” is actually just cheap plastic. The hard disk (I’m going to keep calling it a hard disk despite it not being one) does actually use a USB-C plug, surprisingly. It comes with a USB-C to USB-A cable, as well as two adapters: one USB-A to USB-C and one USB-A to USB-A micro.&#xA;&#xA;Using a plastic spudger tool, it was pretty easy to crack open the plastic case and see what was inside.&#xA;&#xA;As you can see, the “hard disk” is just a couple sd cards hot glued into a some slots, with a controller for each, and then a chip to the right that likely handles the USB traffic (I think, I’m not good at identifying uses of chips). What’s interesting(ish) is they lasered off the top of the three ICs, so you can’t identify what chips they are.&#xA;&#xA;So two 128T micro sd cards. Looking at sd card info, the specification does exist to have a maximum 128T on a card with SDUC, but there’s no, or very few, commercial products even using the standard so far, and definitely not for cheap. 
Obviously, the cards are a lie too, assuming they are supposed to be 128T each.&#xA;&#xA;At this point I had posted the above photos to a couple different chats. Some people were guessing it was going to present as two 128T drives instead of a single 256T, others thought it might show up as a single slow USB 2 drive. It was time to find out just how bad this was.&#xA;&#xA;Connecting&#xA;&#xA;I grabbed an old laptop and put a clean copy of Fedora KDE 43 on it (no way I was plugging this into anything that holds my real data), as soon as that was done, I plugged in the hard disk and… nothing. Dolphin, the KDE file manager, didn’t present any removable devices. Looking at dmesg and /dev though, I was able to identify two drives attached, each 128T:&#xA;&#xA;It instantly wasn’t happy, though:&#xA;&#xA;The critical target errors continued a bit more before it finally settled down. So, good start.&#xA;&#xA;Looking at the partitions in fdisk, each disk had two. A small “Microsoft reserved” partition (gpt code 0c01), and then a \~128T fat partition, except it had the fs-type of NTFS specified (or gpt code 0700, “Microsoft basic data”, which might be fine for exfat.. I’m just used to that being NTFS).&#xA;&#xA;Anyway, mounting /dev/sd\[ab\]2 into separate directories with some default settings (the only thing I did was have the mount be owned by my user account), I can now start some testing.&#xA;&#xA;Testing and Stats&#xA;&#xA;To start with, I used bonnie++ to do some basic disk writing and reading. Each test took hours to run.. glad I wasn’t doing it on my normal machine as I could just set the laptop aside and focus on whatever else I was doing without interfering with these tests.  I did three tests: one on sda by itself, one on sdb by itself, and one with both sda and sdb running at the same time. This basically took a day to run them all.  
For all of them, I used the command bonnie++ -d /mnt/sdX2&#xA;&#xA;This does the standard test reading and writing files to the mounted drive. I then used boncsv2html to collate the results into an html file. It does the colouring itself. The html results are linked here (and the csv source is here), but if you don’t like clicking here’s a screenshot:&#xA;&#xA;As you can see, it sucks. Latency actually reaches out of microseconds range into the seconds range in some cases. Reads are worse than writes, but I think that’s because it’s not actually writing to these fake/hacked sd cards, so it can fly.&#xA;&#xA;After this I was going to use badblocks to see what that would do, but badblocks apparently doesn’t work with large filesystems, where numbers go out of the 32-bit range and into the 64-bit. So with a quick kagi search, I ended up finding f3 (“fight flash fraud”), something made specifically for these shenanigans.&#xA;&#xA;Scanning the two drives with f3 (using f3probe —destructive —time-ops /dev/sdX_), I got similar results for both:&#xA;&#xA;It instantly recognised these were junk.&#xA;&#xA;I wanted to do a reading/writing test with the f3 tools, just to see, but I figured I’d redo the partitions first to see if I could get it to format as ext4. I went into disk, deleted all the partitions, and then created a single partition on each disk, gpt type 8300 (Linux Filesystem).  I then tried to format the drives as ext4, but as expected, it didn’t work:&#xA;&#xA;Attempting to mount the partitions as ext4 then failed. I might be able to make it at least mount by using a filesystem that doesn’t try to write superblocks throughout it, but for now I think I’m done.&#xA;&#xA;Results/Conclusion&#xA;&#xA;As expected, this “hard disk” is just fake rubbish. It’s interesting to dig in and see just how bad it is, though. 
At some point I’ll probably scrape the hot glue off and plug the sd cards into an adapter to see if I can read/use them normally, but I’m sure they’re bottom of the barrel in quality. Thanks AliExpress!]]&gt;</description>
      <content:encoded><![CDATA[<p>This isn’t a new thing, but I wanted to explore it myself.</p>

<p>I found a 256T portable drive on AliExpress for $31USD. I had to check it out.</p>



<h3 id="the-hard-disk">The Hard Disk</h3>

<p>Someone on Discord mentioned that AliExpress’s 11.11 sale was coming up, so I browsed the app to see if there was any stupid stuff I wanted to buy (spoiler alert: there was), but one thing that stuck out was a USB portable drive. Not only was it super cheap, but it ranged in sizes from 1T to 256T! Amazing! <a href="https://www.aliexpress.com/item/1005009862541174.html?spm=a2g0o.order_list.order_list_main.47.2e511802Qmtssd" title="link to hard disk">A steal</a>!</p>

<p><img src="https://i.snap.as/eQgkqiTv.jpg" alt=""/></p>

<p>After buying, it arrived the following week. It said 256TB on the box, so they were sticking with that claim, I guess. My first impression was that it was LIGHT. Obviously no spinning disks in the case, but it was light even if it was going to hold solid-state storage. I was beginning to think it wasn’t the amazing deal I thought it’d be.</p>

<p><img src="https://i.snap.as/iMGnqZ9k.jpg" alt=""/></p>

<p>Opening up the box, you can instantly tell the “metal housing” is actually just cheap plastic. The hard disk (I’m going to keep calling it a hard disk despite it not being one) does actually use a USB-C plug, surprisingly. It comes with a USB-C to USB-A cable, as well as two adapters: one USB-A to USB-C and one USB-A to USB-A micro.</p>

<p><img src="https://i.snap.as/XpS4rob1.jpg" alt=""/></p>

<p>Using a plastic spudger tool, it was pretty easy to crack open the plastic case and see what was inside.</p>

<p><img src="https://i.snap.as/2dGrCRWG.jpg" alt=""/><img src="https://i.snap.as/UUpy5oKt.jpg" alt=""/></p>

<p>As you can see, the “hard disk” is just a couple of SD cards hot-glued into some slots, with a controller for each, and then a chip to the right that likely handles the USB traffic (I think; I’m not good at identifying what chips do). What’s interesting(ish) is they lasered off the tops of the three ICs, so you can’t identify what chips they are.</p>

<p>So, two 128T micro SD cards. Looking at <a href="https://en.wikipedia.org/wiki/SD_card" title="sd card info from wikipedia">SD card info</a>, the SDUC specification does allow a maximum of 128TB on a card, but there are no, or very few, commercial products even using that standard so far, and definitely not for cheap.</p>

<p>At this point I had posted the above photos to a couple different chats. Some people were guessing it was going to present as two 128T drives instead of a single 256T, others thought it might show up as a single slow USB 2 drive. It was time to find out just how bad this was.</p>

<h3 id="connecting">Connecting</h3>

<p>I grabbed an old laptop and put a clean copy of Fedora KDE 43 on it (no way I was plugging this into anything that holds my real data). As soon as that was done, I plugged in the hard disk and… nothing. Dolphin, the KDE file manager, didn’t present any removable devices. Looking at dmesg and /dev though, I was able to identify two drives attached, each 128T:</p>

<p><img src="https://i.snap.as/NOBiqpGz.png" alt=""/>
It instantly wasn’t happy, though:</p>

<p><img src="https://i.snap.as/ph5bMire.png" alt=""/></p>

<p>The critical target errors continued a bit more before it finally settled down. So, good start.</p>

<p>Looking at the partitions in fdisk, each disk had two. A small “Microsoft reserved” partition (gpt code 0c01), and then a ~128T fat partition, except it had the fs-type of NTFS specified (or gpt code 0700, “Microsoft basic data”, which might be fine for exfat.. I’m just used to that being NTFS).</p>

<p>Anyway, mounting /dev/sd[ab]2 into separate directories with some default settings (the only thing I did was have the mount be owned by my user account), I can now start some testing.</p>
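<p>Roughly, that mounting step looks like this (just a sketch; the device names are what dmesg showed me and will vary, and I’m assuming exFAT-style filesystems, where <code>uid=</code>/<code>gid=</code> mount options set ownership since the filesystem has no Unix permissions of its own):</p>

```shell
# Rough sketch of the mounts (device names assumed; adjust to what dmesg shows).
# uid=/gid= hand ownership of the mounted files to the current user, since
# exFAT/vfat filesystems don't store Unix ownership themselves.
opts="uid=$(id -u),gid=$(id -g)"
for dev in /dev/sda2 /dev/sdb2; do
    mnt="/mnt/${dev##*/}"          # /mnt/sda2, /mnt/sdb2
    [ -b "$dev" ] || continue      # skip if the device isn't present
    sudo mkdir -p "$mnt"
    sudo mount -o "$opts" "$dev" "$mnt"
done
```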

<h3 id="testing-and-stats">Testing and Stats</h3>

<p>To start with, I used bonnie++ to do some basic disk writing and reading. Each test took hours to run.. glad I wasn’t doing it on my normal machine, as I could just set the laptop aside and focus on whatever else I was doing without interfering with these tests. I did three tests: one on sda by itself, one on sdb by itself, and one with both sda and sdb running at the same time. This basically took a day to run them all. For all of them, I used the command <code>bonnie++ -d /mnt/sdX2</code></p>

<p>This does the standard test reading and writing files to the mounted drive. I then used bon_csv2html to collate the results into an html file. It does the colouring itself. <a href="https://assets.joyrex.net/hd-stats.html">The html results are linked here</a> (<a href="https://assets.joyrex.net/hd-stats.csv" title="csv source for hd stats">and the csv source is here</a>), but if you don’t like clicking here’s a screenshot:</p>

<p><img src="https://i.snap.as/y9CNzFNS.png" alt=""/></p>

<p>As you can see, it sucks. Latency actually reaches out of the microseconds range into the seconds range in some cases. Reads are worse than writes, but I think that’s because it’s not actually writing to these fake/hacked SD cards, so writes can fly.</p>
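<p>For anyone reproducing this, the run-and-collate pipeline was roughly the following (a sketch: bonnie++ prints a machine-readable CSV line at the end of each run, and bon_csv2html turns the collected lines into the HTML table; mount points are from my setup):</p>

```shell
# Sketch of the bonnie++ -> CSV -> HTML pipeline. Guarded so it only runs
# where bonnie++ is actually installed.
if command -v bonnie++ >/dev/null; then
    : > results.csv                                     # start with an empty CSV
    for mnt in /mnt/sda2 /mnt/sdb2; do
        bonnie++ -d "$mnt" | tail -n 1 >> results.csv   # keep only the CSV line
    done
    bon_csv2html < results.csv > hd-stats.html
else
    echo "bonnie++ not installed; skipping"
fi
```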

<p>After this I was going to use badblocks to see what that would do, but badblocks apparently doesn’t work with devices this large, where block counts go out of the 32-bit range and into 64-bit territory. So with a quick Kagi search, I ended up finding <a href="https://github.com/AltraMayor/f3?tab=readme-ov-file" title="link to f3 repo">f3 (“fight flash fraud”)</a>, something made specifically for these shenanigans.</p>
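<p>The arithmetic behind that badblocks limitation checks out: counted in badblocks’ default 1024-byte blocks, a claimed 128TB device has far more blocks than a 32-bit counter can address.</p>

```shell
# 128TB (decimal) divided into badblocks' default 1024-byte blocks
blocks=$(( 128 * 1000**4 / 1024 ))   # 125,000,000,000 blocks
max32=$(( 2**32 ))                   # 4,294,967,296
echo "blocks=$blocks max32=$max32"
if [ "$blocks" -gt "$max32" ]; then
    echo "block count exceeds the 32-bit range"
fi
```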

<p>Scanning the two drives with f3 (using <code>f3probe --destructive --time-ops /dev/sdX</code>), I got similar results for both:</p>

<p><img src="https://i.snap.as/HKm5U2XG.png" alt=""/>
<img src="https://i.snap.as/gVfC07C0.png" alt=""/></p>

<p>It instantly recognised these were junk.</p>

<p>I wanted to do a reading/writing test with the f3 tools, just to see, but I figured I’d redo the partitions first to see if I could get the drives to format as ext4. I went into the partition editor, deleted all the partitions, and then created a single partition on each disk, GPT type 8300 (Linux Filesystem). I then tried to format the drives as ext4, but as expected, it didn’t work:</p>

<p><img src="https://i.snap.as/Gdp68Rni.png" alt=""/></p>

<p>Attempting to mount the partitions as ext4 then failed. I might be able to make it at least mount by using a filesystem that doesn’t try to write superblocks throughout it, but for now I think I’m done.</p>
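<p>For anyone repeating the repartitioning, it can be sketched non-interactively. Note that sgdisk is my substitution for the interactive editor, and /dev/sdX is a placeholder; the commands are printed rather than executed here because they’re destructive:</p>

```shell
#!/bin/sh
# Dry-run sketch of the repartition-and-format attempt described above.
# sgdisk stands in for the interactive partition editor; /dev/sdX is a placeholder.
wipe='sgdisk --zap-all /dev/sdX'                      # delete all existing partitions
part='sgdisk --new=1:0:0 --typecode=1:8300 /dev/sdX'  # one whole-disk Linux Filesystem (8300) partition
mkfs='mkfs.ext4 /dev/sdX1'                            # the step that fails on this fake flash
printf '%s\n' "$wipe" "$part" "$mkfs"
```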

<h3 id="results-conclusion">Results/Conclusion</h3>

<p>As expected, this “hard disk” is just fake rubbish. It’s interesting to dig in and see just how bad it is, though. At some point I’ll probably scrape the hot glue off and plug the SD cards into an adapter to see if I can read/use them normally, but I’m sure they’re bottom of the barrel in quality. Thanks AliExpress!</p>
]]></content:encoded>
      <guid>https://blog.joyrex.net/256tb-aliexpress-drive-testing</guid>
      <pubDate>Sun, 23 Nov 2025 02:01:46 +0000</pubDate>
    </item>
    <item>
      <title>Moving On From Proton</title>
      <link>https://blog.joyrex.net/moving-on-from-proton?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I’ve been a paying member of Proton for almost 10 years (10 years in February 2026, it looks like), so I’ve been heavily hooked into their ecosystem. My every-day stuff was ProtonMail, ProtonVPN, and ProtonPass. I synced stuff to ProtonDrive as a backup of a backup, and I used ProtonCalendar as my personal calendar. I don’t do a heavy amount of calendaring, however, so I don’t count that as one of my main products. I’m moving on from Proton, so I thought I’d document the issues I’ve had moving, and the alternatives for the different products.&#xA;&#xA;!--more--&#xA;&#xA;History/Why Proton&#xA;&#xA;A screenshot of my subscription history with Proton&#xA;&#xA;Before Proton, I was a Google user (gmail/photos/etc). Their growing evilness caused me to look at what else was out there, and two companies were getting started to give an alternative to gmail: ProtonMail and Tuta (then called TutaNota). I started with Tuta and used it for a few months (I think? maybe it was like 6 months-1yr), but Proton was doing faster work to get an email client that had some of the nice features gmail had gotten me used to, so I switched to them. At some point they added Calendar, VPN, Drive, Pass and other stuff, and as the new features came out I just migrated over since it was part of my subscription and the features were useful. Drive was too slow to come out, so I never started using it heavily.. I had already moved on to other stuff to get off Google Drive, and I was used to my replacements already. Generally, though, I was all in on Proton’s stuff.&#xA;&#xA;Now/Why Move On&#xA;&#xA;Proton has, in general, been pretty good. I know people have complaints about data they’ve shared with authorities before (here’s one example, here’s proton’s official transparency report), and they’re looking to get out of Switzerland, but either way it’s way better than what Google provides. 
Plus their ecosystem is pretty smooth, and they seem to be working on improving things on their existing products. Unfortunately, they (to my view) also try to get new products out really quickly, instead of reiterating to improve and add missing features to their existing ecosystem in a timely manner. This leads to annoyances where they announce something new and you feel like you’re being ignored (like slow-to-release desktop clients/etc). Also, their subreddits are heavily moderated by them, so it all feels a bit controlled without being able to get authenticity in reviews/etc). Those are minor though, things I could live with. What convinced me they’re going down a bad path I don’t want to spend money on/don’t want to support is three-fold:&#xA;&#xA;Strike 1: Bitcoin Wallet&#xA;&#xA;This came out in a time where core features to their existing products were still lacking. I don’t remember exactly what, but it was likely desktop apps. I just remember it left a bad taste in my mouth.&#xA;&#xA;I think Bitcoin CAN have a use (besides buying drugs online), but like a lot of technologies, douchebag techbros have turned it into something that is more harmful and damaging than useful. Proton going down this route sucked.&#xA;&#xA;Strike 2: CEO’s Trump Support&#xA;&#xA;CEO of Proton (Andy Yen) tweeting his support for Trump&#39;s AG of antirtrust pick tweeting his support for Trump&#39;s AG of antirtrust pick&#34;)&#xA;&#xA;In early 2025, someone figured out that the CEO of Proton tweeted (on his personal account) his support for Trump’s pick of Gail Slater as Assistant AG for Antitrust at the US Department of Justice. This reeked of techbro liberalism (which means Andy is probably a conservative shithead who just won’t admit it). Proton posted a reply on the subreddit about how corporate democrats need to be brought down (which imho is true) and that’s why republicans are a good choice (uhhhh.. what the fuck?). This blew up further, which panicked Proton. 
They deleted the message and put out a response from Andy.&#xA;&#xA;Personally, my opinion is the world is full of techbro losers that think they’re going to save the world by being conservatives who want good things only for them. This is just another example.&#xA;&#xA;Strike 3: GenAI&#xA;&#xA;In mid 2024, Proton announced Proton Scribe (also I think they tried to backdoor introduce it, which gave it a bit of a Streisand Effect). This is a tool for businesses to help you write emails using AI. This can go without saying, but what a fucking waste of resources. This caused an uproar in the community, because real tech users can clearly see how over-hyped and wasteful Generative AI is. However, it was business only, so whatever.&#xA;&#xA;Fast forward to July 2025, and Proton announces Proton Lumo, a full AI chatbot that respects your privacy blah blah blah. This is where I drew the line. Again, GenAI is a wasteful, over-hyped technology being heavily used to destroy the world. Just like crypto. It’s just the next iteration of brainless CEOs and their sycophants jumping to some technology that is going to “revolutionise the world”, but it doesn’t. It just causes pain and misery. There’s stories coming out often now about people becoming emotionally dependent on AI chatbots that are made to re-affirm everything the person does or says. This isn’t healthy.&#xA;&#xA;At this point, I decided it’s time to jump ship. I am not going to keep giving money to a company that is clearly captured by right wing shitheads. Proton claims the subscription prices don’t fund their R&amp;D lab, which is where these shitty products like Scribe, Wallet, and Lumo are coming from, but those people obviously aren’t working for free. Meanwhile, there’s still (for example) no Proton Drive desktop client for Linux. 
With my subscription being up in a few months, this felt like the perfect time to jump and make sure I am settled in with the new stuff and all my stuff is moved over OK before I lose the old copies on Proton.&#xA;&#xA;The Alternatives&#xA;&#xA;To do the move, I had to evaluate what Proton stuff I used, and what alternatives are out there. It was up to me how much I wanted to investigate each different replacement. This was similar from when I moved from Google, so I was used to this.&#xA;&#xA;One thing I read about is don’t put all my eggs in one basket. Not that I have much choice: I don’t think there’s any other privacy-focused providers out there that provide the same stuff that Proton provides.&#xA;&#xA;ProtonMail&#xA;&#xA;As I said above, I actually first stated with Tutanota (now Tuta), but they were falling behind on updates and polish on their email service, so I jumped to Proton. That was 10 years ago, and Tuta has completely caught up on mail (for my use cases). So, it seemed like the clear choice to migrate back to. They \mostly\ are the same as ProtonMail, except they do a bit more encryption (in ProtonMail, only the message body is encrypted. All the headers (From, To, Subject, Date, etc) are plaintext. Tuta encrypts it all. They both allow custom domains with full SPF/DKIM/DMARC/etc/etc.&#xA;&#xA;I used the proton-bridge software (in a container) to let local services send email out (alerts, email verification for stuff I run, etc). This software hooks into your proton account and keeps a local copy of your email. Then it provides an IMAP and SMTP interface that lets you use any normal email client with your proton account. I only used the SMTP interface to relay mail. Due to the way Tuta does their encryption, there’s no bridge-type software for Tuta accounts. So, for SMTP relays, there’s services like mailgun and smtp2go. 
For the amount of low-volume email I send from my services, I could conceivably just use their free tiers, but something felt off to me for using giant mail services on their free tiers. So I kept searching until I found a mail relay I liked and seemed good. I found Dynu, which has a $10USD/yr service that seemed good. I think the $10 is a nice low price while also hopefully acting as a wall to prevent spammers from using it and potentially ruining their reputation.&#xA;&#xA;So, drumroll…&#xA;&#xA;ProtonMail Replacement: Tuta&#xA;&#xA;SMTP Relay (proton-bridge) Replacement: Dynu Outbound SMTP Relay&#xA;&#xA;ProtonCalendar&#xA;&#xA;I don’t do any serious calendaring, so Tuta’s encrypted calendar is fine.&#xA;&#xA;ProtonCalendar Replacement: Tuta&#xA;&#xA;ProtonVPN&#xA;&#xA;I primarily use the VPN service’s mobile apps on my phone/tablet, and wireguard on computers/servers. For me, I’ve heard great thing about mullvad for years, so it seemed like a no-brainer.&#xA;&#xA;One thing ProtonVPN offers that mullvad doesn’t is port forwarding for certain traffic that needs to go through the VPN and be available on a certain port. A couple years ago mullvad removed that feature due to abuse. This could screw up my qbittorrent instance that goes through the VPN so people don’t know where they’re getting their Linux ISOs from. Oddly, this didn’t seem to be an issue in practice. More below.&#xA;&#xA;ProtonVPN Replacement: Mullvad&#xA;&#xA;ProtonPass&#xA;&#xA;Before ProtonPass I had used LastPass (before they started to absolutely suck), then Vaultwarden (an alternative Bitwarden client-compliant server). People can use Bitwarden with their stuff being stored on Bitwarden’s servers, but I decided to self-host on my local kube cluster using vaultwarden.&#xA;&#xA;ProtonPass Replacement: Vaultwarden&#xA;&#xA;ProtonDrive&#xA;&#xA;As I said above, I didn’t heavily use ProtonDrive. 
I have a Synology NAS so I use that, with a cloud service for extra backups off-site.&#xA;&#xA;ProtonDrive Replacement: SynologyDrive &amp; iDrive&#xA;&#xA;The Move&#xA;&#xA;Now that I have made my choices, I had to start moving stuff. I am going to document the software/services in the order I did the move.&#xA;&#xA;Dynu Outbound SMTP Relay&#xA;&#xA;This was the first thing I wanted to replace. Previously I had a proton-bridge container running in my kube cluster, and had various services pointing to it for sending email to myself and others. To replace this, I set up a new account with them. Then I set up a new kube deployment running a namshi/smtp container, with the environment variables configured to forward to the dynu relay. I put this in so if I move in the future I can just update the config here once, instead of going to my various services to update their smtp config.&#xA;&#xA;The most time was spent realising I needed to have a subdomain for the relay, and then had to attach SPF and whatever records against that, while still making sure I didn’t screw up mail sending with my ProtonMail.&#xA;&#xA;Vaultwarden&#xA;&#xA;This was another container deployment (actually a helm chart). The install had no real issues, but afterwards there are a couple minor things:&#xA;&#xA;The desktop Bitwarden app won’t log in. I tried the Flatpak and AppImage, as well as the Fedora-built version. It just bombs out and says “an unexpected error has occured”. The android app and firefox plugin work, so besides looking at the issue pages of both bitwarden and vaultwarden, I haven’t done much digging.&#xA;   Update: This was because I had ingress-anubis in front of the vaultwarden instance. The desktop app must use a user agent that anubis considers an interactive session, so it tries to do its stuff and that doesn’t work. Switching to just the normal ingress-nginx ingress fixed it.&#xA;&#xA;Exporting from ProtonPass and importing to Vaultwarden doesn’t include passkeys. 
I think this is by design, because the passkeys have to be tied to the passkey software? I’m not sure, but I need to go through and set up a new set of passkeys. I haven’t done this yet, but I’m hopeful the protonpass interface lets me list all passkeys so I know where I need to go set up new ones.&#xA;&#xA;Mullvad&#xA;&#xA;Mullvad doesn’t use accounts like a standard provider, you just get a random number and that’s all you can use. You cannot lose that number, because it’s not tied to an email or anything, so if you lose the number, there’s no recovery.&#xA;&#xA;Mullvad’s Wireguard config generator worked fine. It’d be nice if I could relabel the keys it generates but no big deal.&#xA;&#xA;Mullvad’s Android App was picking up the wrong language for some reason. I had English (Australia) and Español (Mexico) as my options, and from what I can tell the app didn’t support English (Australia) so it would default to Español. I stopped setting Español (Mexico) as an available language in my phone and it started working in English again.&#xA;&#xA;Mullvad w/ my VPN’d bittorrent setup is interesting. I am definitely using my mullvad wireguard connection for torrenting, but it’s showing it’s not firewalled at all (which presumably it should be), and uploads are HEAPS faster. In addition, torrents that previously weren’t connecting for seeding are connecting. So I don’t get it, but it works, and works well.&#xA;&#xA;Tuta&#xA;&#xA;This was more of an adventure, largely due to the migration of existing mail.&#xA;&#xA;I initially signed up for Tuta’s Revolutionary tier, which gives me custom domains (up to three; I only need one). To import old mail, though, you have to sign up for the Legend tier. Presumably I could sign up for a month, import all my mail and then go back to Revolutionary, but I had already purchased a year of Revolutionary (stupid move), so upgrading to Legend for the year made more sense with the pro-rata. 
I’m lucky I’m able to absorb the cost, but it’s something to keep in mind.&#xA;&#xA;Anyway, proton provides an email exporter. This dumps all your emails into a directory in .eml format (1/email), as well as metadata in json (1/email). This can be a ton of files depending on how much mail you have in your Proton account (and I imported my old gmail into Proton so it was a lot).  The filenames are encrypted long names (I think hashes of the email content to ensure uniqueness? not sure).&#xA;&#xA;The importer for Tuta requires the Legend tier, as mentioned, but also you have to run the desktop client to do the import. I had trouble getting the desktop client flatpak to work. It would bomb out when trying to log in. The AppImage version worked. The flatpak is called (Beta) so I guess it’s still a WIP. When I have more time I’m going to do a bit more testing and maybe check the issues for Tuta and see if it’s new or a known thing.&#xA;&#xA;Now that the desktop client was functional and I was upgraded to the Legend tier, it was time to do the import. I run Fedora 42 KDE right now, so when I went to select the email files to import, it uses the Dolphin browser. Selecting tens of thousands of eml files with long \~90-character long names just causes Dolphin to complain, so the directory needs to be split up a bit. To that end, I did the following things:&#xA;&#xA;Deleted all the metadata.json files. They aren’t used. You can’t just rm \.metadata.json if you have too many files, it’ll be too long of a command for your shell/rm. So:&#xA;  find . 
-name ‘.metadata.json’ -exec rm {} \;&#xA;Sorted all the entries into subdirectories based off the first character in their filename:&#xA;  for i in a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z  0 1 2 3 4 5 6 7 8 9;&#xA;  do&#xA;      mkdir $i;&#xA;      mv $i.eml $i;&#xA;  done&#xA;Renamed all the files so they’re shorter and Dolphin doesn’t barf&#xA;  for i in a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z _ 0 1 2 3 4 5 6 7 8 9;&#xA;  do&#xA;      var=0&#xA;      for eml in $i/.eml;&#xA;      do&#xA;          mv $eml $i/$var.eml&#xA;          ((var++));&#xA;      done&#xA;  done&#xA;&#xA;There’s probably better ways to get that range of letters/numbers but whatever. I only had to type them out once then could copy/paste as needed. This gave me 63 directories with 800-900 files in each, with the files named 0.eml, 1.eml … 890.eml, etc. Also the root directory has \~900 files in it that start with -, which I didn’t put in the overall for loop to avoid accidentally passing a filename as an argument to mv and breaking stuff… But… they still needed to be renamed, so I had to end up doing it anyway.&#xA;&#xA;From here, I could select all the (previously named) -\.eml files and import them. Then, I had to go to each subdirectory and import each set of files in 800-900 file blocks using clickops in the Tuta desktop app. It took time (in fact, I wrote the majority of this while I kept alt-tabbing every few minutes to process the next block).&#xA;&#xA;Once the mail was done, it was a simple matter to export the Proton Calendar and Contacts (into .ics and .vcf files) and import them into Tuta.&#xA;&#xA;The Pricing&#xA;&#xA;I moved off Proton for ideological reasons, but I thought it would be worth examining the price differences, if any. I haven’t actually priced this out seriously, so this will be new to me too. 
I will do this in Euro since the majority of the services I’m comparing are in Euro. I am not including ProtonDrive because I never really used it, besides as a backup to my existing stuff, so the existing stuff exists in both cases.&#xA;&#xA;Proton Unlimited (1 year) Subscription: 95.19€. I think that’s slightly cheaper because I bought two years at once, because looking on their site the equivalent for a new subscriber for 1y would be 120€.&#xA;&#xA;Tuta (1 year) Subscription Revolutionary: 36€, but I had to upgrade to Legendary, and did it for a year, so that becomes 96€. I should be able to drop back down to Revolutionary next year. If you go down this route I suggest just starting with a month of Legendary, get the email imported, then go by the year for Revolutionary if you’re able.&#xA;&#xA;Mullvad (1 year): 60€&#xA;&#xA;Dynu (1 year): \\8.50€&#xA;&#xA;Totalling&#xA;\\*&#xA;Proton: 95€ (me) / 120€ (new)&#xA;Replacements: 164.5€ (first year) / 104.5 (future years)&#xA;&#xA;So, more expensive right now due to my eagerness to do year subscriptions, and fairly competitive after that, while not supporting the AI/crypto BS Proton are doing. I’m happy with it.&#xA;&#xA;Final Thoughts&#xA;&#xA;There’s not many ways we can vote with our wallet these days. Everything is amalgamated into this common evil that everyone does. That isn’t the case with Tuta, and to some extent, Proton, but I fear Proton is headed that way. This has been an interesting experience, and bugs are still apparent in both Proton and its replacements, but it’s more than functional.&#xA;&#xA;I enjoy this nerd stuff so moving over was about 8-10 hours (spread over two days) of doing setups/migrations.&#xA;&#xA;I subscribe to and use Kagi for searching, and they provide an AI product too, so I questioned myself about why I was OK with their AI product (not using it, but that they’re offering it) and not Proton. I think it’s because of the other points against Proton. 
All together it gives me an overall picture where they’re going to be sliding into AI in everything, while Kagi keeps stuff very separate and it doesn’t seem to hurt their main product (a spectacular search experience).]]&gt;</description>
      <content:encoded><![CDATA[<p>I’ve been a paying member of Proton for almost 10 years (10 years in February 2026, it looks like), so I’ve been heavily hooked into their ecosystem. My every-day stuff was ProtonMail, ProtonVPN, and ProtonPass. I synced stuff to ProtonDrive as a backup of a backup, and I used ProtonCalendar as my personal calendar. I don’t do a heavy amount of calendaring, however, so I don’t count that as one of my main products. I’m moving on from Proton, so I thought I’d document the issues I’ve had moving, and the alternatives for the different products.</p>



<h2 id="history-why-proton">History/Why Proton</h2>

<p><img src="https://i.snap.as/VHozJ649.png" alt="A screenshot of my subscription history with Proton" title="A screenshot of my subscription history with Proton"/></p>

<p>Before Proton, I was a Google user (gmail/photos/etc). Their growing evilness caused me to look at what else was out there, and two companies were getting started to offer an alternative to gmail: ProtonMail and Tuta (then called TutaNota). I started with Tuta and used it for a few months (I think? maybe it was more like 6 months to a year), but Proton was doing faster work on an email client with some of the nice features gmail had gotten me used to, so I switched to them. At some point they added Calendar, VPN, Drive, Pass and other stuff, and as the new features came out I just migrated over since it was part of my subscription and the features were useful. Drive was too slow to come out, so I never started using it heavily; I had already moved on to other stuff to get off Google Drive, and I was used to my replacements. Generally, though, I was all in on Proton’s stuff.</p>

<h2 id="now-why-move-on">Now/Why Move On</h2>

<p>Proton has, in general, been pretty good. I know people have complaints about data they’ve shared with authorities before (<a href="https://techcrunch.com/2021/09/06/protonmail-logged-ip-address-of-french-activist-after-order-by-swiss-authorities/">here’s one example</a>, <a href="https://proton.me/legal/transparency">here’s proton’s official transparency report</a>), and they’re looking to get out of Switzerland, but either way it’s way better than what Google provides. Plus their ecosystem is pretty smooth, and they seem to be working on improving their existing products. Unfortunately, they (to my view) also try to get new products out really quickly, instead of iterating to improve and add missing features to their existing ecosystem in a timely manner. This leads to annoyances where they announce something new and you feel like you’re being ignored (like slow-to-release desktop clients/etc). Also, their subreddits are heavily moderated by them, so it all feels a bit controlled, without much authenticity in reviews/etc. Those are minor though, things I could live with. What convinced me they’re going down a bad path I don’t want to spend money on/don’t want to support is three-fold:</p>

<h3 id="strike-1-bitcoin-wallet">Strike 1: Bitcoin Wallet</h3>

<p>This came out in a time where core features to their existing products were still lacking. I don’t remember exactly what, but it was likely desktop apps. I just remember it left a bad taste in my mouth.</p>

<p>I think Bitcoin CAN have a use (besides buying drugs online), but like a lot of technologies, douchebag techbros have turned it into something that is more harmful and damaging than useful. Proton going down this route sucked.</p>

<h3 id="strike-2-ceo-s-trump-support">Strike 2: CEO’s Trump Support</h3>

<p><img alt="CEO of Proton (Andy Yen) tweeting his support for Trump’s antitrust AG pick"/></p>

<p>In early 2025, someone figured out that the CEO of Proton tweeted (on his personal account) his support for Trump’s pick of Gail Slater as Assistant AG for Antitrust at the US Department of Justice. This reeked of techbro liberalism (which means Andy is probably a conservative shithead who just won’t admit it). Proton posted a reply on the subreddit about how corporate democrats need to be brought down (which imho is true) and that’s why republicans are a good choice (uhhhh.. what the fuck?). This blew up further, which panicked Proton. They deleted the message and <a href="https://www.reddit.com/r/ProtonMail/comments/1i2nz9v/on_politics_and_proton_a_message_from_andy/">put out a response from Andy</a>.</p>

<p>Personally, my opinion is the world is full of techbro losers that think they’re going to save the world by being conservatives who want good things only for them. This is just another example.</p>

<h3 id="strike-3-genai">Strike 3: GenAI</h3>

<p>In mid 2024, Proton <a href="https://proton.me/blog/proton-scribe-writing-assistant">announced Proton Scribe</a> (also I think they tried to backdoor introduce it, which gave it a bit of a Streisand Effect). This is a tool for businesses to help you write emails using AI. This can go without saying, but what a fucking waste of resources. This caused an uproar in the community, because real tech users can clearly see how over-hyped and wasteful Generative AI is. However, it was business only, so whatever.</p>

<p>Fast forward to July 2025, and Proton <a href="https://proton.me/blog/lumo-ai">announces Proton Lumo</a>, a full AI chatbot that respects your privacy blah blah blah. This is where I drew the line. Again, GenAI is a wasteful, over-hyped technology being heavily used to destroy the world. Just like crypto. It’s just the next iteration of brainless CEOs and their sycophants jumping to some technology that is going to “revolutionise the world”, but it doesn’t. It just causes pain and misery. There’s stories coming out often now about people becoming emotionally dependent on AI chatbots that are made to re-affirm everything the person does or says. This isn’t healthy.</p>

<p>At this point, I decided it’s time to jump ship. I am not going to keep giving money to a company that is clearly captured by right wing shitheads. Proton claims the subscription prices don’t fund their R&amp;D lab, which is where these shitty products like Scribe, Wallet, and Lumo are coming from, but those people obviously aren’t working for free. Meanwhile, there’s still (for example) no Proton Drive desktop client for Linux. With my subscription being up in a few months, this felt like the perfect time to jump and make sure I am settled in with the new stuff and all my stuff is moved over OK before I lose the old copies on Proton.</p>

<h2 id="the-alternatives">The Alternatives</h2>

<p>To do the move, I had to evaluate what Proton stuff I used, and what alternatives are out there. It was up to me how much I wanted to investigate each replacement. This was similar to when I moved from Google, so I was used to it.</p>

<p>One piece of advice I read was: don’t put all your eggs in one basket. Not that I have much choice: I don’t think there are any other privacy-focused providers out there that offer the same spread of products Proton does.</p>

<h3 id="protonmail">ProtonMail</h3>

<p>As I said above, I actually first started with Tutanota (now <a href="https://tuta.com/">Tuta</a>), but they were falling behind on updates and polish on their email service, so I jumped to Proton. That was 10 years ago, and Tuta has completely caught up on mail (for my use cases). So, it seemed like the clear choice to migrate back to. They *mostly* are the same as ProtonMail, except Tuta does a bit more encryption: in ProtonMail, only the message body is encrypted, and the headers (From, To, Subject, Date, etc) are plaintext, while Tuta encrypts it all. They both allow custom domains with full SPF/DKIM/DMARC/etc.</p>

<p>I used the proton-bridge software (in a container) to let local services send email out (alerts, email verification for stuff I run, etc). This software hooks into your proton account and keeps a local copy of your email. Then it provides an IMAP and SMTP interface that lets you use any normal email client with your proton account. I only used the SMTP interface to relay mail. Due to the way Tuta does their encryption, there’s no bridge-type software for Tuta accounts. So, for SMTP relays, there are services like <a href="https://www.mailgun.com/">mailgun</a> and <a href="https://www.smtp2go.com/">smtp2go</a>. For the low volume of email I send from my services, I could conceivably just use their free tiers, but something felt off about using giant mail services for free. So I kept searching until I found a relay I liked: Dynu, which has a $10USD/yr service. I think the $10 is a nice low price while also hopefully acting as a wall to prevent spammers from using it and ruining their reputation.</p>

<p>So, drumroll…</p>

<p><strong>ProtonMail Replacement:</strong> <a href="https://tuta.com/">Tuta</a></p>

<p><strong>SMTP Relay (proton-bridge) Replacement</strong>: <a href="https://www.dynu.com/en-US/Email/Outbound-SMTP-Relay">Dynu Outbound SMTP Relay</a></p>

<h3 id="protoncalendar">ProtonCalendar</h3>

<p>I don’t do any serious calendaring, so Tuta’s encrypted calendar is fine.</p>

<p><strong>ProtonCalendar Replacement:</strong> <a href="https://tuta.com/">Tuta</a></p>

<h3 id="protonvpn">ProtonVPN</h3>

<p>I primarily use the VPN service’s mobile apps on my phone/tablet, and WireGuard on computers/servers. I’ve heard great things about Mullvad for years, so it seemed like a no-brainer.</p>

<p>One thing ProtonVPN offers that mullvad doesn’t is port forwarding for certain traffic that needs to go through the VPN and be available on a certain port. A couple years ago <a href="https://mullvad.net/en/blog/removing-the-support-for-forwarded-ports">mullvad removed that feature</a> due to abuse. This could screw up my qbittorrent instance that goes through the VPN so people don’t know where they’re getting their Linux ISOs from. Oddly, this didn’t seem to be an issue in practice. More below.</p>

<p><strong>ProtonVPN Replacement:</strong> <a href="https://mullvad.net/en">Mullvad</a></p>

<h3 id="protonpass">ProtonPass</h3>

<p>Before ProtonPass I had used LastPass (before they started to absolutely suck), then Vaultwarden (an alternative Bitwarden client-compliant server). People can use Bitwarden with their stuff being stored on Bitwarden’s servers, but I decided to self-host on my local kube cluster using vaultwarden.</p>

<p><strong>ProtonPass Replacement:</strong> <a href="https://github.com/dani-garcia/vaultwarden">Vaultwarden</a></p>

<h3 id="protondrive">ProtonDrive</h3>

<p>As I said above, I didn’t heavily use ProtonDrive. I have a Synology NAS so I use that, with a cloud service for extra backups off-site.</p>

<p><strong>ProtonDrive Replacement:</strong> <a href="https://www.synology.com/en-global/dsm/feature/drive">SynologyDrive</a> &amp; <a href="https://www.idrive.com/">iDrive</a></p>

<h2 id="the-move">The Move</h2>

<p>Now that I have made my choices, I had to start moving stuff. I am going to document the software/services in the order I did the move.</p>

<h3 id="dynu-outbound-smtp-relay">Dynu Outbound SMTP Relay</h3>

<p>This was the first thing I wanted to replace. Previously I had a proton-bridge container running in my kube cluster, with various services pointing to it for sending email to myself and others. To replace it, I set up a new account with Dynu, then created a new kube deployment running a <a href="https://hub.docker.com/r/namshi/smtp">namshi/smtp container</a>, with environment variables configured to forward to the Dynu relay. I put this indirection in so if I move providers in the future I can update the config here once, instead of going to each of my services to update their SMTP config.</p>
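<p>As a concrete illustration, a deployment along these lines might look like the following. This is a hedged sketch: the names, hostname, and credentials are all placeholders, and the environment variables are the ones documented for the namshi/smtp image, so double-check its README before copying anything.</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smtp-relay               # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smtp-relay
  template:
    metadata:
      labels:
        app: smtp-relay
    spec:
      containers:
        - name: smtp
          image: namshi/smtp
          ports:
            - containerPort: 25
          env:
            - name: SMARTHOST_ADDRESS
              value: "relay.dynu.com"    # placeholder; use the relay host Dynu gives you
            - name: SMARTHOST_PORT
              value: "587"
            - name: SMARTHOST_USER
              value: "user@example.com"  # placeholder
            - name: SMARTHOST_PASSWORD
              value: "changeme"          # use a Secret in practice
            - name: RELAY_NETWORKS
              value: ":10.0.0.0/8"       # placeholder CIDR allowed to relay
```

<p>Cluster services then point their SMTP settings at this deployment’s service on port 25, and only this one spot knows about Dynu.</p>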

<p>The most time was spent realising I needed a subdomain for the relay, then attaching SPF (and related DNS) records to it, while making sure I didn’t break mail sending for my ProtonMail domain.</p>
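<p>For the curious, the records end up looking something like this zone-file sketch. The <code>include:</code> target for the relay subdomain is a placeholder here (use whatever Dynu’s docs actually specify); the Proton include shown is their documented one.</p>

```text
; relay subdomain used only for outbound relay mail
relay.example.com.  IN TXT "v=spf1 include:spf.dynu.com -all"   ; placeholder include target

; apex keeps its existing SPF for ProtonMail, untouched
example.com.        IN TXT "v=spf1 include:_spf.protonmail.ch -all"
```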

<h3 id="vaultwarden">Vaultwarden</h3>

<p>This was another container deployment (actually a <a href="https://github.com/guerzon/vaultwarden/blob/main/charts/vaultwarden/README.md">helm chart</a>). The install had no real issues, but afterwards there are a couple minor things:</p>
<ol><li><p>The desktop Bitwarden app won’t log in. I tried the Flatpak and AppImage, as well as the Fedora-built version. It just bombs out and says “an unexpected error has occurred”. The Android app and Firefox plugin work, so besides looking at the issue pages of both Bitwarden and Vaultwarden, I haven’t done much digging.
<strong>Update:</strong> This was because I had <a href="https://github.com/jaredallard/ingress-anubis">ingress-anubis</a> in front of the Vaultwarden instance. The desktop app must use a user agent that Anubis considers an interactive session, so it tries to do its browser-check stuff and that fails. Switching to the normal ingress-nginx ingress fixed it.</p></li>

<li><p>Exporting from ProtonPass and importing to Vaultwarden doesn’t include passkeys. I think this is by design, since passkeys seem to be tied to the credential store that created them. Either way, I need to go through and register a new set of passkeys. I haven’t done this yet, but I’m hopeful the ProtonPass interface lets me list all passkeys so I know where I need to set up new ones.</p></li></ol>
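<p>For completeness, the install itself was roughly the usual helm dance. The repo URL and the <code>domain</code> value below are assumptions based on the linked chart’s README; verify against it (and the chart’s values file) before running.</p>

```shell
helm repo add vaultwarden https://guerzon.github.io/vaultwarden  # assumed repo URL; check the README
helm repo update
helm install vaultwarden vaultwarden/vaultwarden \
  --namespace vaultwarden --create-namespace \
  --set domain="https://vault.example.com"   # placeholder domain; confirm the value key in values.yaml
```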

<h3 id="mullvad">Mullvad</h3>

<p>Mullvad doesn’t use accounts like a standard provider: you just get a random account number, and that’s your only credential. It isn’t tied to an email or anything else, so if you lose the number, there’s no recovery.</p>

<p>Mullvad’s Wireguard config generator worked fine. It’d be nice if I could relabel the keys it generates but no big deal.</p>
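<p>For reference, a generated WireGuard config has this general shape; the keys, addresses, and endpoint below are placeholders, not real Mullvad values:</p>

```ini
[Interface]
PrivateKey = <your generated private key>
Address = 10.64.0.2/32                 # placeholder; assigned by the generator
DNS = 10.64.0.1                        # placeholder

[Peer]
PublicKey = <server public key>
AllowedIPs = 0.0.0.0/0, ::/0           # route all traffic through the tunnel
Endpoint = relay.example.net:51820     # placeholder server hostname
```

<p>Bring it up with <code>wg-quick up ./mullvad.conf</code> and you’re done.</p>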

<p>Mullvad’s Android App was picking up the wrong language for some reason. I had English (Australia) and Español (Mexico) as my options, and from what I can tell the app didn’t support English (Australia) so it would default to Español. I stopped setting Español (Mexico) as an available language in my phone and it started working in English again.</p>

<p>Mullvad w/ my VPN’d bittorrent setup is interesting. I am definitely using my mullvad wireguard connection for torrenting, but it’s showing it’s not firewalled at all (which presumably it should be), and uploads are HEAPS faster. In addition, torrents that previously weren’t connecting for seeding are connecting. So I don’t get it, but it works, and works well.</p>

<h3 id="tuta">Tuta</h3>

<p>This was more of an adventure, largely due to the migration of existing mail.</p>

<p>I initially signed up for Tuta’s Revolutionary tier, which gives me custom domains (up to three; I only need one). To import old mail, though, you have to sign up for the Legend tier. Presumably I could sign up for a month, import all my mail and then go back to Revolutionary, but I had already purchased a year of Revolutionary (stupid move), so upgrading to Legend for the year made more sense with the pro-rata. I’m lucky I’m able to absorb the cost, but it’s something to keep in mind.</p>

<p>Anyway, Proton provides an <a href="https://proton.me/support/proton-mail-export-tool">email exporter</a>. This dumps all your emails into a directory in .eml format (one file per email), plus metadata in JSON (one file per email). That can be a ton of files depending on how much mail is in your Proton account (and I had imported my old Gmail into Proton, so it was a lot). The filenames are long opaque strings (hashes of the email content to ensure uniqueness, I assume, but I’m not sure).</p>

<p>The importer for Tuta requires the Legend tier, as mentioned, but you also have to run the desktop client to do the import. I had trouble getting the desktop client Flatpak to work; it would bomb out when trying to log in. The AppImage version worked. The Flatpak is labelled (Beta), so I guess it’s still a WIP. When I have more time I’ll do a bit more testing and check Tuta’s issue tracker to see if it’s new or a known thing.</p>

<p>Now that the desktop client was functional and I was upgraded to the Legend tier, it was time to do the import. I run Fedora 42 KDE right now, so when I went to select the email files to import, the file picker uses Dolphin. Selecting tens of thousands of .eml files with ~90-character names just causes Dolphin to complain, so the directory needed to be split up a bit. To that end, I did the following things:</p>
<ul><li>Deleted all the metadata.json files. They aren’t used. You can’t just <code>rm *.metadata.json</code> if you have too many files; the expanded command line gets too long for your shell. So:
<code>find . -name '*.metadata.json' -exec rm {} \;</code></li>
<li>Sorted all the entries into subdirectories based off the first character in their filename:
<code>for i in a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z _ 0 1 2 3 4 5 6 7 8 9;</code>
<code>do</code>
<code>mkdir $i;</code>
<code>mv $i*.eml $i;</code>
<code>done</code></li>
<li>Renamed all the files so they’re shorter and Dolphin doesn’t barf:
<code>for i in a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z _ 0 1 2 3 4 5 6 7 8 9;</code>
<code>do</code>
<code>var=0</code>
<code>for eml in $i/*.eml;</code>
<code>do</code>
<code>mv $eml $i/$var.eml</code>
<code>((var++));</code>
<code>done</code>
<code>done</code></li></ul>

<p>There are probably better ways to get that range of letters/numbers, but whatever; I only had to type them out once and could copy/paste as needed. This gave me 63 directories with 800-900 files in each, with the files named 0.eml, 1.eml … 890.eml, etc. The root directory also had ~900 files starting with <code>-</code>, which I left out of the overall <code>for</code> loop to avoid accidentally passing a filename as an argument to <code>mv</code> and breaking stuff… But they still needed to be renamed, so I ended up having to do it anyway.</p>
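<p>Brace expansion is the tidier way to get that character range. Here’s a bash sketch of the same split-and-rename (the <code>touch</code> lines just fake a couple of export-style files so the example is self-contained; for the real thing, run the loop inside the export directory after deleting the metadata files):</p>

```shell
#!/usr/bin/env bash
# Fake a few export-style files so the sketch is self-contained.
touch aaaa1111.eml aaaa2222.eml Zzzz9999.eml

# Split by first character, then rename to short sequential names.
for i in {a..z} {A..Z} {0..9} _; do
  mkdir -p "$i"
  mv "$i"*.eml "$i"/ 2>/dev/null  # some characters match nothing; that's fine
  var=0
  for eml in "$i"/*.eml; do
    [ -e "$eml" ] || continue     # skip characters with no files
    mv "$eml" "$i/$var.eml"
    var=$((var + 1))
  done
done
# Result for the fake files above: a/0.eml, a/1.eml, Z/0.eml
```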

<p>From here, I could select all the renamed (formerly <code>-</code>-prefixed) .eml files and import them. Then I had to go to each subdirectory and import each set of files in 800-900 file blocks using clickops in the Tuta desktop app. It took time (in fact, I wrote the majority of this post while alt-tabbing every few minutes to process the next block).</p>

<p>Once the mail was done, it was a simple matter to export the Proton Calendar and Contacts (into .ics and .vcf files) and import them into Tuta.</p>

<h2 id="the-pricing">The Pricing</h2>

<p>I moved off Proton for ideological reasons, but I thought it would be worth examining the price differences, if any. I haven’t actually priced this out seriously, so this will be new to me too. I’ll do this in euros since the majority of the services I’m comparing are priced in euros. I’m not including ProtonDrive because I never really used it except as an extra backup for my existing setup, which exists in both cases.</p>

<p>Proton Unlimited (1 year) Subscription: <strong>95.19€</strong>. I think that’s slightly cheaper than list price because I bought two years at once; looking at their site, the equivalent for a new subscriber for 1 year would be <strong>120€</strong>.</p>

<p>Tuta (1 year) Subscription, Revolutionary: <strong>36€</strong>, but I had to upgrade to Legend, and did it for a year, so that becomes <strong>96€</strong>. I should be able to drop back down to Revolutionary next year. If you go down this route, I suggest starting with a single month of Legend, getting the email imported, then buying Revolutionary by the year if you’re able.</p>

<p>Mullvad (1 year): <strong>60€</strong></p>

<p>Dynu (1 year): <strong>8.50€</strong></p>

<p>Totalling:<br>
Proton: <strong>95€ (me) / 120€ (new)</strong><br>
Replacements: <strong>164.50€ (first year) / 104.50€ (future years)</strong></p>

<p>So, more expensive right now due to my eagerness to do year subscriptions, and fairly competitive after that, while not supporting the AI/crypto BS Proton are doing. I’m happy with it.</p>

<h2 id="final-thoughts">Final Thoughts</h2>

<p>There’s not many ways we can vote with our wallet these days. Everything is amalgamated into this common evil that everyone does. That isn’t the case with Tuta, and to some extent, Proton, but I fear Proton is headed that way. This has been an interesting experience, and bugs are still apparent in both Proton and its replacements, but it’s more than functional.</p>

<p>I enjoy this nerd stuff, so moving over took about 8-10 hours (spread over two days) of setups/migrations.</p>

<p>I subscribe to and use <a href="https://kagi.com/">Kagi</a> for searching, and they provide an AI product too, so I questioned why I’m OK with Kagi offering an AI product (I don’t use it) but not Proton. I think it’s because of the other points against Proton. Altogether it paints a picture of Proton sliding AI into everything, while Kagi keeps it very separate, and it doesn’t seem to hurt their main product (a spectacular search experience).</p>
]]></content:encoded>
      <guid>https://blog.joyrex.net/moving-on-from-proton</guid>
      <pubDate>Mon, 28 Jul 2025 06:04:25 +0000</pubDate>
    </item>
    <item>
      <title>Screaming into the void</title>
      <link>https://blog.joyrex.net/screaming-into-the-void?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I am not sure what I’m going to write here. With everything going on in the world right now (cliche phrase, but COME ON), I decided to try and write down some off my angst and stress and depression. As it is, I’m fairly lucky right now, and not heavily effected (yet). Many people are much much much worse off than I am from this, but this is just the beginning, so it’s about what’s coming down the road as much as it’s what’s currently happening.&#xA;&#xA;!--more--&#xA;&#xA;I’ve been listening to The Orb a lot lately, specifically the album I first bought of theirs back in the mid-late 90s: Orblivion.  Besides being good ambient, spaced out music, I have fond memories of listening to this while trying to get through Hazy Maze Cave in Mario64. It’s funny/good how music and smells can associate to specific memories like that. Anyway, I wonder if I am subconsciously getting back into The Orb, and this album in particular, because it brings my brain back to a quieter time of sitting in front of the TV playing video games and not giving a shit about anything else, although I was a teenager and probably had all the teenager stresses back then. They haven’t filtered through, though. I have Orblivion on right now as I write this, as it seems like good music to write stuff too. No real lyrics to pay attention to. Anyway….&#xA;&#xA;The US is dead, it just hasn’t realised it yet. It’s been dead for a while, but everything seems to have amped up to a million in the last month, and it was already running hot before that. 
I think I want to try and examine how it died from my view, all the signs along the way that the wheels were coming off, and the killers who would ignore or even exploit the signs in order to get rich, or the myopic fools who had a genuine belief that a short-term fix was going to be a stepping stone to a better America (and world, being the world’s leading superpower).&#xA;&#xA;I believe it started in the 70s with the rise of neo-liberalism as a backlash to the good life (for a majority, DEFINITELY not everyone) that seemed to form after The New Deal, but I am writing this from my view of the world, and that didn’t really hit until the 90s. I would say I was a know-nothing back then, but I remember in a debate class mentioning American Exceptionalism (although I didn’t know the term back then) to the surprise of my teacher when we were debating gas prices in class. I wasn’t a debate nerd, I just needed an extra class to fill, but that memory sticks in my head and makes me think “oh maybe I’ve always been disillusioned by America” even though I came from a white middle-upper class family, what America was heavily benefiting at the time.&#xA;&#xA;Anyway, the turn of the century really crystalised stuff for me. This may be because of various things:&#xA;&#xA;Turning 18 in April 2000&#xA;Dot com crash of 1999/2000s&#xA;2000 Election + the hanging chad scandal&#xA;Moving to Australia in Feb 2001 / living with various different cultures in student residences&#xA;9/11 in 2001&#xA;&#xA;I think a combination of all that really started giving me a different view of the world. I got out of the bubble of “America is the best!” that most Americans were/are insulated in, even unknowingly.  It’s constant propaganda on TV, driving down the highway, etc.  Some people embrace the nationalism, even more-so now, but the US is deadly to so many and is not a thing to be celebrated. I still had it ingrained in me. 
Right after 9/11 I remember setting my MSN Messenger name to “Operation Enduring Freedom”, encouraged by “our” response to Afghanistan. I remember my Dad messaging me saying “so I guess you’re for it, then?” or something along those lines. I think he knew it was bad, but I had to figure it out for myself.&#xA;&#xA;So while I was being “deprogrammed”, I was still learning about the world in general. And every time I learned something new, it’d incorporate into my world view, and bit by bit, the patriotic love of the US was fading.  Instead I was discovering how other countries aren’t worse than the US just by virtue of not being the US. In fact, I was learning how horrible the US was, and how cruel it was to large swathes of its people, and how they were even worse to people that weren’t Americans. But hey, the Daily Show was helping us sort this out.&#xA;&#xA;I question if The Daily Show (and later The Colbert Report) was a net gain or net loss for the US. It was real good at pointing out the hypocrisies and shittiness of the US politicians, but it also promoted the West Wing-type (I’ve never seen the show, just know from context) debate of “well if you just point out where they are assuming something incorrectly, they’ll fix their view.”  The Daily Show (and Colbert Report) were not prepared for politicians to be complete shitheads for ideological or greediness reasons.  They thought everyone had good in them, and that if they were just given a kind hand, they’d come good. That is clearly not the case. They also promoted the idea that politicians would accurately vote or represent what their constituents wanted. That is also clearly not the case. So for a long while in the 2000s the answer to the creeping death of the US was “vote harder and debate the shitheads, giving them a voice”.  Meanwhile, the US and the world got worse. As long as we vote for the Democrat, we’ll end up being OK! 
Was the DS/CR promoting solutions within the system because that’s all the writers could see? They were still owned by a major conglomerate (Viacom) which was profiting nicely off the status quo… so maybe they were directed to gin up outrage, but only so much, and only on one side, so there could be a steady flow of money. I don’t know, but either way, the time of the 2000s lead to an angry base for the Democrats to motivate and organise.  Enter Obama.&#xA;&#xA;If I had to pick one person who has convinced me the most that the Democrats are a lost cause, it’d be Obama.  He ran an amazing ground game in 2008. Hope and change and all that, amazing grassroots organising. The people were fired up. The people were ready to change America! He won by a good margin! Let’s fix the US! Then he ripped up his grassroots network. The Great Recession was happening (not his fault) and his response was to reward the companies involved and jail no one of value (100% his fault). He completely capitulated to the exact same companies that ruined the lives of millions of US citizens (and non-citizens, that shouldn’t matter). Meanwhile, he pushed the Affordable Care Act (Obamacare). Originally a good concept, it included a provision for single-payer, meaning universal health care. This is unambiguously a good thing. But the GOP fought back and held his feet to the fire, and he crumbled. Suddenly the insurance companies are involved and have a say in what it will look like. In fact, they were courted. The Democratic curse of “run to the right in compromise” poisoned what could have been truly groundbreaking legislation. Obamacare did ultimately pass, and it was OK, and it helped some people. Ironically, a lot of the people it helped would never vote Democrat, so even as a vote-getting exercise, it was mid.  It could have been so much more, though, if Democrats had stuck to their guns. But that’s not Obama’s style. 
This is the man that had a “beer summit” when a racist cop arrested a black guy in his own home. His solution was to kowtow to the right, whether out of cowardice or he just sucks, and make the guy that was FULLY THE AGGRIEVED PARTY sit down with the racist shithead that arrested him, like both were at fault somehow so both had to swallow some pride and sit down together. Bullshit.&#xA;&#xA;So that defines a lot of Obama’s term. Constant capitulation and minor improvements that help a few and give the Dems something to try and differentiate themselves from the GOP, however slight. Meanwhile, the right is getting more conservative, more racist, more selfish. The overton window is heavily drifting to the right, by a base that is more angry and more ruthless about getting what they want. So we’ve got weak Democrats with nothing good to show, at this point running off their own smugness, and a strong, angry base in the GOP. Hello, 2016.&#xA;&#xA;A successor to Obama was a big thing, he was a breath of fresh air to many who didn’t really have any dogs in the fight and were more worried about civility. The Daily Show/West Wing attitude of “we can debate” is now firmly in the minds of the Democratic party leaders, and most of the non-leaders, while the GOP is stripping copper from the walls. Who can pick up the mantle and lead the Democratic party, and the US, into a world were things don’t get worse, and maybe, just maybe, improve a little? How about the wife of an ex-president from another era? Someone who defines “civility politics”? Hillary.&#xA;&#xA;The scheming of the DNC to place their chosen successor on the throne probably doesn’t need much detail, but I will say that by doing this, it satisfied the people within the DNC, but on the outside, it really showed the raw corrupt core of the Democrats to the world. 
The Dems saw Trump as an easy win, so took the opportunity to try and cement their power in the DNC, as obviously they would be the leader of the US next, and should be able to run their party how they want. This didn’t go well, of course. They couldn’t beat a multiple-bankruptcy game show host who was on the record of sexually assaulting women (and maybe even rape? I can’t remember at this point). The Dems were right, this SHOULD have been a slam dunk, but internal power plays and a disconnect from the people of America spelt their doom. I could have told you it was coming. People were screaming from the rooftops that it was coming, but the Dems at this point just ran off smugness. They want all the same money-making corruption the GOP has, but to appear above it… it doesn’t work. They’re shown to be the very hypocrites Daily Show had spent a decade showing us in the GOP.  Panic mode set in, and the Democratic party decided to handle it how they always do: no new ideas, double down, capitulate further.  Suddenly they are paring down bills before they even get to a vote. Then of course, these neutered bills are sent out for a vote, and the GOP demands even more concessions. It’s an unending cycle of weakness and pushing rightward.&#xA;&#xA;Now is about the time I am completely disillusioned with America. Seeing the black heart of the party that is supposed to be the progressive party kills all hope. The GOP has won. America is in trouble.&#xA;&#xA;Since then it has been more of the same. Fetterman, Sinema, Biden, Pelosi, Schumer, Feinstein. The Democratic administration was actively encouraging and supporting a genocide, and was proud of it. People like “The Squad” try to give a vaguely left voice, but are constantly sabotaged by their own party, even when what they’re pushing has already been cut down. 
The same playbook the GOP used against the Dems is now being used by the Dems against anyone who doesn’t support the corrupt core of the Dems.&#xA;&#xA;What has their response been? More smugness, no self examination. People saying we need to vote more, need to get out the vote, need to SEND THE DEMOCRATS MONEY. These people are millionaires or billionaires off legalised insider trading, but they need my $20. You’ve gotta be kidding me.&#xA;&#xA;So now Trump et al is in charge, and no one is around to fight him. AOC posts some good stuff but it’s just words. The Dems are lining up to support GOP picks for positions, no resistance, no fight. It’s over. The US is dead, it just hasn’t realised it yet.&#xA;&#xA;OK maybe it’s not dead, but its salvation doesn’t lie in the Democratic party. What comes next is anyone’s guess. Every day Trump and team are stripping more and more rights from people, and destroying lives. Sometimes even with the encouragement of the so called “progressive party”. I don’t know what comes next, but I know the Democratic party won’t be the thing that stands up to it, that fights it. If anyone, it will be grassroots mutual aid communities that provide the most resistance. The Democrats will try to co-opt it like they do any popular movement they think they can profit off of, but they need to be rejected in the strongest possible terms. At this point the Dems are experts at poisoning a movement from the inside, it’s what most career DNC members strive for. They cannot be allowed to get a foothold on what comes next, or it will all be for naught.&#xA;&#xA;While all this is happening, and the US makes stupid move after stupid move, where it sits on the world stage will change.  It’s been declining for a long time, but has still generally been the de-facto world leader.  China has been making major inroads though (literally in some places).  They’re providing other countries with actual infrastructure and support and help, not demands.  
When the US recently left the World Health Organization, China said they’d step in and give the funding that the US was withdrawing. I believe that since then some US-based billionaire has stepped in and said he will kick in, which I gotta assume he sees as a protective play from letting China take over the world, where his status would be greatly diminished. What’s amazing is people are lining up to thank the billionaire, like he didn’t make all that money off exploiting people. Capitalism is built on exploitation. The more money you have the worse you are. Billionaires are the worst.&#xA;&#xA;Anyway, all this to say that time is ticking, and the US is accelerating its decline, both internationally and nationally. I think at this point I welcome it, however I don’t know how badly people are going to be fucked over in the meantime. I don’t know what China would be like as a world leader. They have problems, but I don’t think they’re any worse than the US at this point. It’s hard for me to accurately judge them because of the years of western propaganda we’re given about them, but the more I learn about the western world and how this supposed “better than china” utopia works, the more I see the (western) emperor has no clothes. I am curious to see what the world looks like under China, and hopeful that a change of that magnitude could lead to better outcomes, because right now there’s nothing.&#xA;&#xA;At the end of all this is people. People have been getting the rough end of this, and will continue. People are suffering. People will continue to suffer. The US won’t save us. The Democrats won’t save us. China won’t save us. Target, Costco, AOC, Musk, Bill Gates, etc.. no one is saving us. It’s up to us to try and support each other. Mutual-aid networks are what keep people going. It can feel overwhelming, so many people need help, but mutual aid is where every little bit helps, much more-so than some millionaire politician’s slush fund. 
It’s hard to know who to help, how to help, and how much to help… but those are personal decisions that you make yourself, and the answer is: “whatever you feel comfortable with”. You are the only one to answer to. So give, help people, and feel good you’ve made a concrete difference in someone’s life. Do what you are comfortable with, don’t let yourself get bullied about what you do or don’t give, but by that measure, no bragging about what you do or don’t give. How you contribute is your business and should be done for your own desire to help, not for recognition. Mutual aid is how we fight the shitheads. It’s how we help each other. It’s how we shine a bit of light in the face of overwhelming darkness.&#xA;&#xA;OK, I think I’m done screaming for now. Orblivion is almost at the end of the second CD (full of remixes and stuff). Do I feel better? I don’t know. The world is still horrible and getting worse, but we all do what we can to get by, with an emphasis on helping others who are currently worse-off.&#xA;&#xA;❤️]]&gt;</description>
<content:encoded><![CDATA[<p>I am not sure what I’m going to write here. With everything going on in the world right now (cliche phrase, but COME ON), I decided to try and write down some of my angst and stress and depression. As it is, I’m fairly lucky right now, and not heavily affected (yet). Many people are much much much worse off than I am from this, but this is just the beginning, so it’s about what’s coming down the road as much as it’s what’s currently happening.</p>



<p>I’ve been listening to The Orb a lot lately, specifically the album I first bought of theirs back in the mid-late 90s: Orblivion.  Besides being good ambient, spaced out music, I have fond memories of listening to this while trying to get through Hazy Maze Cave in Mario64. It’s funny/good how music and smells can associate to specific memories like that. Anyway, I wonder if I am subconsciously getting back into The Orb, and this album in particular, because it brings my brain back to a quieter time of sitting in front of the TV playing video games and not giving a shit about anything else, although I was a teenager and probably had all the teenager stresses back then. They haven’t filtered through, though. I have Orblivion on right now as I write this, as it seems like good music to write stuff to. No real lyrics to pay attention to. Anyway….</p>

<p>The US is dead, it just hasn’t realised it yet. It’s been dead for a while, but everything seems to have amped up to a million in the last month, and it was already running hot before that. I think I want to try and examine how it died from my view, all the signs along the way that the wheels were coming off, and the killers who would ignore or even exploit the signs in order to get rich, or the myopic fools who had a genuine belief that a short-term fix was going to be a stepping stone to a better America (and world, being the world’s leading superpower).</p>

<p>I believe it started in the 70s with the rise of neo-liberalism as a backlash to the good life (for a majority, DEFINITELY not everyone) that seemed to form after The New Deal, but I am writing this from my view of the world, and that didn’t really hit until the 90s. I would say I was a know-nothing back then, but I remember in a debate class mentioning American Exceptionalism (although I didn’t know the term back then) to the surprise of my teacher when we were debating gas prices in class. I wasn’t a debate nerd, I just needed an extra class to fill, but that memory sticks in my head and makes me think “oh maybe I’ve always been disillusioned by America” even though I came from a white middle-upper class family, which America was heavily benefiting at the time.</p>

<p>Anyway, the turn of the century really crystalised stuff for me. This may be because of various things:</p>
<ul><li>Turning 18 in April 2000</li>
<li>Dot com crash of 1999/2000s</li>
<li>2000 Election + the hanging chad scandal</li>
<li>Moving to Australia in Feb 2001 / living with various different cultures in student residences</li>
<li>9/11 in 2001</li></ul>

<p>I think a combination of all that really started giving me a different view of the world. I got out of the bubble of “America is the best!” that most Americans were/are insulated in, even unknowingly.  It’s constant propaganda on TV, driving down the highway, etc.  Some people embrace the nationalism, even more-so now, but the US is deadly to so many and is not a thing to be celebrated. I still had it ingrained in me. Right after 9/11 I remember setting my MSN Messenger name to “Operation Enduring Freedom”, encouraged by “our” response to Afghanistan. I remember my Dad messaging me saying “so I guess you’re for it, then?” or something along those lines. I think he knew it was bad, but I had to figure it out for myself.</p>

<p>So while I was being “deprogrammed”, I was still learning about the world in general. And every time I learned something new, it’d incorporate into my world view, and bit by bit, the patriotic love of the US was fading.  Instead I was discovering how other countries aren’t worse than the US just by virtue of not being the US. In fact, I was learning how horrible the US was, and how cruel it was to large swathes of its people, and how they were even worse to people that weren’t Americans. But hey, the Daily Show was helping us sort this out.</p>

<p>I question if The Daily Show (and later The Colbert Report) was a net gain or net loss for the US. It was real good at pointing out the hypocrisies and shittiness of the US politicians, but it also promoted the West Wing-type (I’ve never seen the show, just know from context) debate of “well if you just point out where they are assuming something incorrectly, they’ll fix their view.”  The Daily Show (and Colbert Report) were not prepared for politicians to be complete shitheads for ideological or greediness reasons.  They thought everyone had good in them, and that if they were just given a kind hand, they’d come good. That is clearly not the case. They also promoted the idea that politicians would accurately vote or represent what their constituents wanted. That is also clearly not the case. So for a long while in the 2000s the answer to the creeping death of the US was “vote harder and debate the shitheads, giving them a voice”.  Meanwhile, the US and the world got worse. As long as we vote for the Democrat, we’ll end up being OK! Was the DS/CR promoting solutions within the system because that’s all the writers could see? They were still owned by a major conglomerate (Viacom) which was profiting nicely off the status quo… so maybe they were directed to gin up outrage, but only so much, and only on one side, so there could be a steady flow of money. I don’t know, but either way, the time of the 2000s led to an angry base for the Democrats to motivate and organise.  Enter Obama.</p>

<p>If I had to pick one person who has convinced me the most that the Democrats are a lost cause, it’d be Obama.  He ran an amazing ground game in 2008. Hope and change and all that, amazing grassroots organising. The people were fired up. The people were ready to change America! He won by a good margin! Let’s fix the US! Then he ripped up his grassroots network. The Great Recession was happening (not his fault) and his response was to reward the companies involved and jail no one of value (100% his fault). He completely capitulated to the exact same companies that ruined the lives of millions of US citizens (and non-citizens, that shouldn’t matter). Meanwhile, he pushed the Affordable Care Act (Obamacare). Originally a good concept, it included a provision for single-payer, meaning universal health care. This is unambiguously a good thing. But the GOP fought back and held his feet to the fire, and he crumbled. Suddenly the insurance companies are involved and have a say in what it will look like. In fact, they were courted. The Democratic curse of “run to the right in compromise” poisoned what could have been truly groundbreaking legislation. Obamacare did ultimately pass, and it was OK, and it helped some people. Ironically, a lot of the people it helped would never vote Democrat, so even as a vote-getting exercise, it was mid.  It could have been so much more, though, if Democrats had stuck to their guns. But that’s not Obama’s style. This is the man that had a “beer summit” when a racist cop arrested a black guy in his own home. His solution was to kowtow to the right, whether out of cowardice or he just sucks, and make the guy that was FULLY THE AGGRIEVED PARTY sit down with the racist shithead that arrested him, like both were at fault somehow so both had to swallow some pride and sit down together. Bullshit.</p>

<p>So that defines a lot of Obama’s term. Constant capitulation and minor improvements that help a few and give the Dems something to try and differentiate themselves from the GOP, however slight. Meanwhile, the right is getting more conservative, more racist, more selfish. The Overton window is drifting heavily to the right, pushed by a base that is more angry and more ruthless about getting what they want. So we’ve got weak Democrats with nothing good to show, at this point running off their own smugness, and a strong, angry base in the GOP. Hello, 2016.</p>

<p>A successor to Obama was a big thing; he was a breath of fresh air to many who didn’t really have any dogs in the fight and were more worried about civility. The Daily Show/West Wing attitude of “we can debate” is now firmly in the minds of the Democratic party leaders, and most of the non-leaders, while the GOP is stripping copper from the walls. Who can pick up the mantle and lead the Democratic party, and the US, into a world where things don’t get worse, and maybe, just maybe, improve a little? How about the wife of an ex-president from another era? Someone who defines “civility politics”? Hillary.</p>

<p>The scheming of the DNC to place their chosen successor on the throne probably doesn’t need much detail, but I will say that by doing this, it satisfied the people within the DNC, but on the outside, it really showed the raw corrupt core of the Democrats to the world. The Dems saw Trump as an easy win, so took the opportunity to try and cement their power in the DNC, as obviously they would be the leader of the US next, and should be able to run their party how they want. This didn’t go well, of course. They couldn’t beat a multiple-bankruptcy game show host who was on the record of sexually assaulting women (and maybe even rape? I can’t remember at this point). The Dems were right, this SHOULD have been a slam dunk, but internal power plays and a disconnect from the people of America spelt their doom. I could have told you it was coming. People were screaming from the rooftops that it was coming, but the Dems at this point just ran off smugness. They want all the same money-making corruption the GOP has, but to appear above it… it doesn’t work. They’re shown to be the very hypocrites Daily Show had spent a decade showing us in the GOP.  Panic mode set in, and the Democratic party decided to handle it how they always do: no new ideas, double down, capitulate further.  Suddenly they are paring down bills before they even get to a vote. Then of course, these neutered bills are sent out for a vote, and the GOP demands even more concessions. It’s an unending cycle of weakness and pushing rightward.</p>

<p>Now is about the time I am completely disillusioned with America. Seeing the black heart of the party that is supposed to be the progressive party kills all hope. The GOP has won. America is in trouble.</p>

<p>Since then it has been more of the same. Fetterman, Sinema, Biden, Pelosi, Schumer, Feinstein. The Democratic administration was actively encouraging and supporting a genocide, and was proud of it. People like “The Squad” try to give a vaguely left voice, but are constantly sabotaged by their own party, even when what they’re pushing has already been cut down. The same playbook the GOP used against the Dems is now being used by the Dems against anyone who doesn’t support the corrupt core of the Dems.</p>

<p>What has their response been? More smugness, no self-examination. People saying we need to vote more, need to get out the vote, need to SEND THE DEMOCRATS MONEY. These people are millionaires or billionaires off legalised insider trading, but they need my $20. You’ve gotta be kidding me.</p>

<p>So now Trump et al are in charge, and no one is around to fight them. AOC posts some good stuff but it’s just words. The Dems are lining up to support GOP picks for positions, no resistance, no fight. It’s over. The US is dead, it just hasn’t realised it yet.</p>

<p>OK maybe it’s not dead, but its salvation doesn’t lie in the Democratic party. What comes next is anyone’s guess. Every day Trump and team are stripping more and more rights from people, and destroying lives. Sometimes even with the encouragement of the so-called “progressive party”. I don’t know what comes next, but I know the Democratic party won’t be the thing that stands up to it, that fights it. If anyone, it will be grassroots mutual aid communities that provide the most resistance. The Democrats will try to co-opt it like they do any popular movement they think they can profit off of, but they need to be rejected in the strongest possible terms. At this point the Dems are experts at poisoning a movement from the inside; it’s what most career DNC members strive for. They cannot be allowed to get a foothold on what comes next, or it will all be for naught.</p>

<p>While all this is happening, and the US makes stupid move after stupid move, where it sits on the world stage will change.  It’s been declining for a long time, but has still generally been the de-facto world leader.  China has been making major inroads though (literally in some places).  They’re providing other countries with actual infrastructure and support and help, not demands.  When the US recently left the World Health Organization, China said they’d step in and give the funding that the US was withdrawing. I believe that since then some US-based billionaire has stepped in and said he will kick in, which I gotta assume he sees as a protective play from letting China take over the world, where his status would be greatly diminished. What’s amazing is people are lining up to thank the billionaire, like he didn’t make all that money off exploiting people. Capitalism is built on exploitation. The more money you have the worse you are. Billionaires are the worst.</p>

<p>Anyway, all this to say that time is ticking, and the US is accelerating its decline, both internationally and nationally. I think at this point I welcome it, however I don’t know how badly people are going to be fucked over in the meantime. I don’t know what China would be like as a world leader. They have problems, but I don’t think they’re any worse than the US at this point. It’s hard for me to accurately judge them because of the years of western propaganda we’re given about them, but the more I learn about the western world and how this supposed “better than china” utopia works, the more I see the (western) emperor has no clothes. I am curious to see what the world looks like under China, and hopeful that a change of that magnitude could lead to better outcomes, because right now there’s nothing.</p>

<p>At the end of all this is people. People have been getting the rough end of this, and will continue. People are suffering. People will continue to suffer. The US won’t save us. The Democrats won’t save us. China won’t save us. Target, Costco, AOC, Musk, Bill Gates, etc.: no one is saving us. It’s up to us to try and support each other. Mutual-aid networks are what keep people going. It can feel overwhelming, so many people need help, but mutual aid is where every little bit helps, much more so than some millionaire politician’s slush fund. It’s hard to know who to help, how to help, and how much to help… but those are personal decisions that you make yourself, and the answer is: “whatever you feel comfortable with”. You are the only one to answer to. So give, help people, and feel good you’ve made a concrete difference in someone’s life. Do what you are comfortable with, don’t let yourself get bullied about what you do or don’t give, but by that measure, no bragging about what you do or don’t give. How you contribute is your business and should be done for your own desire to help, not for recognition. Mutual aid is how we fight the shitheads. It’s how we help each other. It’s how we shine a bit of light in the face of overwhelming darkness.</p>

<p>OK, I think I’m done screaming for now. Orblivion is almost at the end of the second CD (full of remixes and stuff). Do I feel better? I don’t know. The world is still horrible and getting worse, but we all do what we can to get by, with an emphasis on helping others who are currently worse off.</p>

<p>❤️</p>
]]></content:encoded>
      <guid>https://blog.joyrex.net/screaming-into-the-void</guid>
      <pubDate>Sat, 01 Feb 2025 00:09:48 +0000</pubDate>
    </item>
    <item>
      <title>The Silent Library on Various Platforms</title>
      <link>https://blog.joyrex.net/the-silent-library-on-various-platforms?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I decided to experience some nostalgia and learn some new useless stuff by playing with some different file sharing platforms, new and old. I also played some platforms that are specifically not made for file sharing.&#xA;&#xA;To do this, I use The Silent Library’s libraries. This is both the main library (subbed content) and the Raw Wing, which holds raws that typesetters and subbers use. I explain a bit of what The Silent Library and Gaki no Tsukai is in this blog post. I love the original show and the catalogued associated shows so much, they are so fun. Full credit to Bipedal for curating and numerous typesetters (the unsung heroes) and subbers (the more-sung heroes) for all their work.&#xA;&#xA;All the method belows are not the best way to get TSL files, unless you are already using the platform. Then maybe it’s the way to go. In this blog post I’ll discuss what I’ve set up, and any quick notes on challenges/problems. The page that indexes the various protocols/networks is at https://tsl.joyrex.net. The below is more about how the setup went and how to access the data.&#xA;&#xA;!--more--&#xA;&#xA;Notes&#xA;&#xA;Once again, the page that indexes the various protocols/networks is at https://tsl.joyrex.net. Also, some of the systems below have chat servers built in. These aren’t for that. They’re unmonitored and the systems are likely not really set up (by me) to really have proper chat controls. I mainly worried about getting the files shared. There is a discord that is used for Gaki chat, as well as the GakiNoTsukai subreddit.&#xA;&#xA;If you have questions, contact me at ejstacey@joyrex.net, @ejstacey.joyrex.net on bluesky, or @ejstacey@kolektiva.social on mastodon.&#xA;&#xA;If you want a better way of getting an up to date version of TSL or TSL Raw Wing, contact Bipedal on https://thesilentlibrary.com.&#xA;&#xA;Modern File Sharing&#xA;&#xA;IPFS&#xA;&#xA;Details&#xA;&#xA;I’ve documented my IPFS journey here and here. 
It has been an interesting and frustrating time, but the end result is I have a system that keeps up to date. The server runs on Debian 11, manual download of kubo (and systemd file). I also have a systemd timer to run my scripts and keep the database up to date (maximum time out of date: 2h).&#xA;&#xA;Accessing&#xA;&#xA;Go to the following to get the current IPFS id of the library you want to access. It is located here &#xA;&#xA;The Silent Library&#xA;The Silent Library Raw Wing&#xA;&#xA;After that, Get IPFS Desktop and open it. Go to the Files area of the program and put in the address you got from the webpage above.&#xA;&#xA;You’ll notice the links above seem to link to a current web-based version of the libraries. This is because it’s going through the IPFS gateway I set up, which lets people on the normal web access the IPFS network. The problem is that the proxy sucks with large files, as it has to “download” it from the IPFS network and then re-serve it to you. On large files, your HTTP call is extremely likely to time out before it has downloaded it all. Still, the links above are a way to access the library, as well.&#xA;&#xA;DC++&#xA;&#xA;Details&#xA;&#xA;I debated putting this under a category below, but despite its age (started in 1999), it is still going strong (seemingly by Russian pirates? I don’t know).&#xA;&#xA;For this one I manually compiled Ptokax on Debian 11 and got it going with systemd. After that, I installed the AirDC++ Web Client Docker container and configured it to host the libraries, then to connect to the Ptokax instance.&#xA;&#xA;This was a damn nightmare. Port forwarding, IP detecting, everything just reminded me how lucky we are to live in a more modern age.&#xA;&#xA;Accessing&#xA;&#xA;Get some DC++ software like the original DC++ (open source) and connect to hub.joyrex.net:4111. 
The ‘ejstacey’ user has both libraries under its Share folder.&#xA;&#xA;Old File Sharing&#xA;&#xA;BBS (Synchronet)&#xA;&#xA;Details&#xA;&#xA;God I love BBSs. This runs Synchronet (still updated) in a docker container and I think I’m going to keep working on it and customising it because it’s so nostalgic, but I have added the files in two libraries, TSL and TSL-RAWS. The BBS is loosely based on the BBS I ran back in the early 90s as a teen.&#xA;&#xA;Accessing&#xA;&#xA;Get something like SyncTerm and telnet to tsl.joyrex.net:2323. You could telnet with another app, but SyncTerm works great and has the needed support for ZMODEM (and others) which you need to do the downloads.&#xA;&#xA;Create an account on first connect. Put in BS info if you want.. I don’t care. I will further tune it in the future to not ask dumb stuff. I recommend using the Reneclone/Renegade Clone interface, as those are the menus I am starting to update. Go to the file section.&#xA;&#xA;IRC / XDCC&#xA;&#xA;Details&#xA;&#xA;Work in progress, but I’m going to run a nothing IRC server and just have an eggdrop bot with XDCC Server script in place.&#xA;&#xA;Accessing&#xA;&#xA;IRC client that supports XDCC. More info when I have it in place.&#xA;&#xA;Protocols that have no business file sharing&#xA;&#xA;Gopher&#xA;&#xA;Details&#xA;&#xA;If you don’t know Gopher), it’s been around since 1991 and was a potential way to browse the Internet for information until HTTP/the World Wide Web ran away with it. Since then, it’s faded into obscurity. It’s made for serving text information, so of course it’s worth making it serve multi-gigabyte videos of a subtitled Japanese show.&#xA;&#xA;Accessing&#xA;&#xA;Get a gopher client that properly handles both spaces and downloading files. This may be difficult. One I found is Gopher Browser for Windows which works pretty well!&#xA;&#xA;Connect to gopher://tsl.joyrex.net&#xA;&#xA;Gemini&#xA;&#xA;Details&#xA;&#xA;Gemini) is like a modern version of gopher. 
It also exists to serve text content. It’s also unhappy serving 5+TB of video content. It’s funny.&#xA;&#xA;Accessing&#xA;&#xA;Get a gemini client that properly handles both spaces and downloading files. This may be difficult. The GemiNaut application definitely does not handle spaces well. The Agregore application handles spaces but doesn’t have a download option, just gives the raw binary content on the screen. Amfora is console-based, but extremely nice and works well.&#xA;&#xA;Connect to gemini://tsl.joyrex.net]]&gt;</description>
      <content:encoded><![CDATA[<p>I decided to experience some nostalgia and learn some new useless stuff by playing with some different file sharing platforms, new and old. I also played some platforms that are specifically not made for file sharing.</p>

<p>To do this, I use <a href="https://thesilentlibrary.com/">The Silent Library</a>’s libraries. This is both the main library (subbed content) and the Raw Wing, which holds raws that typesetters and subbers use. I explain a bit of what The Silent Library and Gaki no Tsukai are in <a href="https://blog.joyrex.net/the-silent-library-on-ipfs">this blog post</a>. I love the original show and the catalogued associated shows so much, they are so fun. Full credit to Bipedal for curating, and to the numerous typesetters (the unsung heroes) and subbers (the more-sung heroes) for all their work.</p>

<p>None of the methods below is the best way to get TSL files, unless you are already using the platform. Then maybe it’s the way to go. In this blog post I’ll discuss what I’ve set up, and any quick notes on challenges/problems. The page that indexes the various protocols/networks is at <a href="https://tsl.joyrex.net">https://tsl.joyrex.net</a>. The below is more about how the setup went and how to access the data.</p>



<h3 id="notes">Notes</h3>

<p>Once again, the page that indexes the various protocols/networks is at <a href="https://tsl.joyrex.net">https://tsl.joyrex.net</a>. Also, some of the systems below have chat servers built in. These aren’t for chatting. They’re unmonitored, and the systems are likely not set up (by me) to have proper chat controls. I mainly worried about getting the files shared. There is a Discord that is used for Gaki chat, as well as the GakiNoTsukai subreddit.</p>

<p>If you have questions, contact me at ejstacey@joyrex.net, @ejstacey.joyrex.net on Bluesky, or <a href="/@/ejstacey@kolektiva.social" class="u-url mention">@<span>ejstacey@kolektiva.social</span></a> on Mastodon.</p>

<p>If you want a better way of getting an up to date version of TSL or TSL Raw Wing, contact Bipedal on <a href="https://thesilentlibrary.com">https://thesilentlibrary.com</a>.</p>

<h3 id="modern-file-sharing">Modern File Sharing</h3>

<h4 id="ipfs">IPFS</h4>

<h5 id="details">Details</h5>

<p>I’ve documented my IPFS journey <a href="https://blog.joyrex.net/the-silent-library-on-ipfs">here</a> and <a href="https://blog.joyrex.net/the-silent-library-on-ipfs-part-2">here</a>. It has been an interesting and frustrating time, but the end result is that I have a system that keeps itself up to date. The server runs on Debian 11, with a manual download of kubo (and a service file I wrote). I also have a systemd timer to run my scripts and keep the database up to date (maximum time out of date: 2h).</p>
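<p>For reference, the timer half of that can be a simple service/timer pair. A sketch only: the unit names, user, and paths here are placeholders, not my actual setup (the script name comes from the later post):</p>

```ini
# /etc/systemd/system/sync-tsl.service
[Unit]
Description=Sync The Silent Library into IPFS

[Service]
Type=oneshot
User=ipfs
ExecStart=/usr/bin/python3 /opt/tsl/sync-tsl-to-ipfs.py --config /opt/tsl/tsl.cfg

# /etc/systemd/system/sync-tsl.timer
[Unit]
Description=Run the TSL/IPFS sync every 2 hours

[Timer]
OnCalendar=0/2:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

<p>Enabling it with <code>systemctl enable --now sync-tsl.timer</code> gives the “maximum time out of date: 2h” behaviour mentioned above.</p>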

<h5 id="accessing">Accessing</h5>

<p>Go to one of the following links to get the current IPFS id of the library you want to access. It is shown here: <img src="https://i.snap.as/fl3Htlfw.png" alt="Screenshot showing where the IPFS id appears on the page"/></p>
<ul><li><a href="https://ipfs.joyrex.net/ipns/k51qzi5uqu5dlrs4wb1uu1q6opfluygq2z9iawn1lcy885xjn0k1yx9a3hdc1m/">The Silent Library</a></li>
<li><a href="https://ipfs.joyrex.net/ipns/k51qzi5uqu5dhmfwbaelaaa7uwzq9b9si2zpiy4k9nft3z4joqpxac1ocfmit9/">The Silent Library Raw Wing</a></li></ul>

<p>After that, get <a href="https://docs.ipfs.tech/install/ipfs-desktop/">IPFS Desktop</a> and open it. Go to the Files area of the program and put in the address you got from the webpage above.</p>

<p>You’ll notice the links above seem to link to a current web-based version of the libraries. This is because it’s going through the IPFS gateway I set up, which lets people on the normal web access the IPFS network. The problem is that the proxy sucks with large files, as it has to “download” it from the IPFS network and then re-serve it to you. On large files, your HTTP call is extremely likely to time out before it has downloaded it all. Still, the links above are a way to access the library, as well.</p>
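<p>If you do pull a biggish file through the gateway anyway, streaming it to disk in chunks (rather than buffering the whole response) at least avoids memory trouble on your end; the timeout problem remains. A minimal sketch using only the Python standard library (the gateway host matches the links above, but the helper names and the 600-second timeout are my own invention):</p>

```python
import shutil
import urllib.request

GATEWAY = "https://ipfs.joyrex.net"  # the gateway set up above

def gateway_url(ipns_key: str, path: str = "") -> str:
    """Build a path-style gateway URL for an IPNS name."""
    return f"{GATEWAY}/ipns/{ipns_key}/{path.lstrip('/')}"

def fetch(ipns_key: str, path: str, dest: str, timeout: float = 600.0) -> None:
    """Stream a file from the gateway to disk in 1 MiB chunks."""
    with urllib.request.urlopen(gateway_url(ipns_key, path), timeout=timeout) as resp, \
            open(dest, "wb") as out:
        shutil.copyfileobj(resp, out, length=1 << 20)
```

<p>Even with streaming, the gateway still has to pull the whole file off the IPFS network before it can finish serving it, so a native IPFS client remains the better option for the big videos.</p>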

<h4 id="dc">DC++</h4>

<h5 id="details-1">Details</h5>

<p>I debated putting this under a category below, but despite its age (it started in 1999), it is still going strong (seemingly kept alive by Russian pirates? I don’t know).</p>

<p>For this one I manually compiled <a href="http://www.ptokax.org/">Ptokax</a> on Debian 11 and got it going with systemd. After that, I installed the <a href="https://registry.hub.docker.com/r/gangefors/airdcpp-webclient/">AirDC++ Web Client Docker container</a> and configured it to host the libraries, then to connect to the Ptokax instance.</p>

<p>This was a damn nightmare. Port forwarding, IP detecting, everything just reminded me how lucky we are to live in a more modern age.</p>
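<p>For anyone attempting the same, the container half is easier to keep track of as a compose file. A sketch only: the image name comes from the link above, but the host paths and port number are assumptions you should check against the image’s own documentation:</p>

```yaml
# docker-compose.yml sketch; host paths and ports are assumptions.
services:
  airdcpp:
    image: gangefors/airdcpp-webclient
    restart: unless-stopped
    volumes:
      - ./airdcpp-config:/.airdcpp    # persistent client settings
      - /mnt/tsl:/share/tsl:ro        # the libraries to share, read-only
    ports:
      - "5600:5600"                   # web UI (assumed default)
```

<p>The transfer ports (and the port forwarding pain described above) still have to be sorted out on top of this.</p>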

<h5 id="accessing-1">Accessing</h5>

<p>Get some DC++ software like the <a href="https://dcplusplus.sourceforge.io/">original DC++</a> (open source) and connect to hub.joyrex.net:4111. The ‘ejstacey’ user has both libraries under its Share folder.</p>

<h3 id="old-file-sharing">Old File Sharing</h3>

<h4 id="bbs-synchronet">BBS (Synchronet)</h4>

<h5 id="details-2">Details</h5>

<p>God I love BBSs. This runs Synchronet (still updated) in a Docker container. I think I’m going to keep working on it and customising it because it’s so nostalgic, but for now I have added the files in two libraries, TSL and TSL-RAWS. The BBS is loosely based on the one I ran back in the early 90s as a teen.</p>

<h5 id="accessing-2">Accessing</h5>

<p>Get something like <a href="https://syncterm.bbsdev.net/">SyncTerm</a> and telnet to tsl.joyrex.net:2323. You could telnet with another app, but SyncTerm works great and has the needed support for ZMODEM (and other protocols), which you need to do the downloads.</p>

<p>Create an account on first connect. Put in BS info if you want; I don’t care. I will further tune it in the future to not ask dumb stuff. I recommend using the Reneclone/Renegade Clone interface, as those are the menus I am starting to update. Go to the file section.</p>

<h4 id="irc-xdcc">IRC / XDCC</h4>

<h5 id="details-3">Details</h5>

<p>Work in progress, but I’m going to run a nothing IRC server and just have an eggdrop bot with XDCC Server script in place.</p>

<h5 id="accessing-3">Accessing</h5>

<p>Use an IRC client that supports XDCC. More info when I have it in place.</p>

<h3 id="protocols-that-have-no-business-file-sharing">Protocols that have no business file sharing</h3>

<h4 id="gopher">Gopher</h4>

<h5 id="details-4">Details</h5>

<p>If you don’t know <a href="https://en.wikipedia.org/wiki/Gopher_(protocol)">Gopher</a>, it’s been around since 1991 and was a potential way to browse the Internet for information until HTTP/the World Wide Web ran away with it. Since then, it’s faded into obscurity. It’s made for serving text information, so of course it’s worth making it serve multi-gigabyte videos of a subtitled Japanese show.</p>

<h5 id="accessing-4">Accessing</h5>

<p>Get a gopher client that properly handles both spaces and downloading files. This may be difficult. One I found is <a href="http://www.jaruzel.com/gopher/gopher-client-browser-for-windows/">Gopher Browser for Windows</a> which works pretty well!</p>

<p>Connect to gopher://tsl.joyrex.net</p>
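<p>On the server side, if your gopher daemon builds menus from gophermap files (Gophernicus does, for example), a video just becomes an item-type-9 line. A sketch with made-up selectors; the fields are separated by literal tab characters:</p>

```
1TSL Raw Wing	/tsl-raws	tsl.joyrex.net	70
9Episode 001 (subbed)	/tsl/episode-001.mkv	tsl.joyrex.net	70
```

<p>Item type 1 is a submenu and type 9 is a binary file (per RFC 1436); whether a client then downloads it cleanly or mangles it is exactly the client lottery described above.</p>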

<h4 id="gemini">Gemini</h4>

<h5 id="details-5">Details</h5>

<p><a href="https://en.wikipedia.org/wiki/Gemini_(protocol)">Gemini</a> is like a modern version of gopher. It also exists to serve text content. It’s also unhappy serving 5+TB of video content. It’s funny.</p>

<h5 id="accessing-5">Accessing</h5>

<p>Get a gemini client that properly handles both spaces and downloading files. This may be difficult. The GemiNaut application definitely does not handle spaces well. The <a href="https://agregore.mauve.moe/">Agregore</a> application handles spaces but doesn’t have a download option, just shows the raw binary content on the screen. <a href="https://github.com/makew0rld/amfora">Amfora</a> is console-based, but extremely nice and works well.</p>

<p>Connect to gemini://tsl.joyrex.net</p>
]]></content:encoded>
      <guid>https://blog.joyrex.net/the-silent-library-on-various-platforms</guid>
      <pubDate>Tue, 17 Oct 2023 03:12:30 +0000</pubDate>
    </item>
    <item>
      <title>The Silent Library on IPFS Part 2</title>
      <link>https://blog.joyrex.net/the-silent-library-on-ipfs-part-2?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[IPFS for TSL 2&#xA;&#xA;This is a continuation from the first blog post.&#xA;&#xA;I’ve spent a lot of time writing some better code to handle keeping the on-disk version of The Silent Library (or anything) up to date with what IPFS sees/knows about.  This means it scans both libraries and adds/removes files to/from IPFS as needed.&#xA;&#xA;Through this experience, I wrote a script in python, and a script in go.  The go one is unfinished, but the python one is working.  In both cases I did this in a “dumb” way, where I wrote all the code to scan and compare libraries, got a clean list of differences/changes that need to be made, then looked at uploading files.  This is where I got stuck.  This is for various reasons: I am new to writing serious stuff in both languages, the IPFS doco is poor, and IPFS itself seems to be in constant flux.  This experience has shown me that IPFS is not a good solution for sharing The Silent Library, or possibly any large project.  Below I will list what I’ve done since the original blog post, as well as further details of the problems I hit trying to explore the IPFS ecosystem.&#xA;&#xA;The Setup was documented soon after the first blog post. The IPFS / Summary sections where documented a couple months later.&#xA;&#xA;!--more--&#xA;&#xA; &#xA;&#xA;The Setup&#xA;&#xA;These steps could be useful to someone who is knowledgeable and wants to&#xA;&#xA;At some point, docker (I think) was becoming too slow to access the files.  They were pinned locally but they still would take forever to download through the gateway, which was on the same docker instance.  I don’t get it.  Anyway, I decided to put it on my Windows server. In addition, the Windows server has a 10gig fibre connection to the NAS, so that’s fun.&#xA;&#xA;To do this I downloaded kubo. I tried to run it as a service, but they don’t fork in daemon mode (I think), so I decided to try this the other way, using IPFS Desktop software.  
This runs as a desktop app, so you must be logged in.  It sits in your systray and can be set to start when you log in.&#xA;&#xA;After that, I installed Python 3.11 with winget and installed requirements for the script with pip.&#xA;&#xA;I checked out the code, set up my config file, and kicked it off.  It took days to set up the IPFS version of what was on disk, but it did finally get there.  I set up the appropriate port forward (TCP/UDP 4001) through my router to this machine and off we go.  I also set up the reverse proxy to sit in front of the HTTP gateway (8080).&#xA;&#xA;Steps&#xA;&#xA;These could be adapted to other OSs, and I bet systemd could actually run the server as a real service (update: since writing this, I’ve moved to kubo on a Debian 11 server with a custom service file I wrote to run ipfs daemon as a regular user).&#xA;&#xA; Install and run IPFS Desktop&#xA;&#xA; Set up port forwarding so 4001 TCP/UDP are available publicly.&#xA;&#xA; Using nginx or apache or IIS or something, set up a reverse proxy in front of 8080 TCP. (IPFS Gateway docs – I use Path type).&#xA;&#xA; Do NOT make port 5001 publicly available.&#xA;&#xA; Install Python 3&#xA;&#xA; Clone the repo holding the code.&#xA;&#xA; Using the installed python, use pip and install the files in requirements.txt&#xA;    python3 -m pip -r requirements.txt.&#xA;&#xA; Set up your config file.&#xA;&#xA;    Copy settings.cfg.example to settings.cfg (or tsl.cfg, whatever).&#xA;&#xA;    Make changes to the file you just copied to.  Some notes:&#xA;&#xA;       IPFS has limitations, so your \\[remote\] tslDirectory\ has to be a subdirectory under where IPFS stores its configuration. This means to make it work you should have a writable directory IPFS can access, and TSL has to be under it somewhere.  
To save disk space you could symlink or bind mount it into that area (depending on what’s hosting your copy of TSL).&#xA;&#xA;       \\[options\] refresh\ should always be True unless you’re debugging or know what you’re doing.&#xA;&#xA;       \\[remote\] ipnsKeyName\ is a unique name you set. It’s tied to your ipfs instance.&#xA;&#xA; On the IPFS Desktop icon in the systray, right click on it, go to Advanced, and choose Move Repository Location.  Choose the IPFS directory you configured in the previous step.  It will quickly move over.&#xA;&#xA;10. In the IPFS Desktop app, go to Settings on the left.  At the bottom is the config file text.  Go to ‘Experimental’ and look for ‘FilestoreEnabled’.  Set that to value to ‘true’.  Save the file and restart the service (right click the systray icon to do the restart).&#xA;&#xA;11. If everything has worked, IPFS is ready to be populated.  Go to where the code was checked out and run: python3 .\\sync-tsl-to-ipfs.py –config tsl.cfg&#xA;&#xA;    Use whatever config file you named&#xA;&#xA;Output should look like this:&#xA;&#xA;Then you’ll see it creating directories on the MFS filesystem and adding files.&#xA;&#xA;It can take days due to IPFS limitations.&#xA;&#xA;The final thing it does when done is “publishes” the current root directory of /The Silent Library (or whatever you specified in your config file) to a permanent id (using IPFS’s IPNS system).  This means that when future updates are done, the hash of the root directory can change, but people can always look at the IPNS location to find the current version of the root directory.  You can use this URL in the gateway systems.&#xA;&#xA;An example of this working is at https://tsl.joyrex.net/&#xA;&#xA;Anyone who does this will end up contributing to the “seeders”.  
Their IPNS won’t be used, but all the stuff under it will be shared, so when someone grabs a file, some comes from me, and some comes from whoever is seeding.&#xA;&#xA; &#xA;&#xA;IPFS Issues and Why It Isn’t Suitable&#xA;&#xA;IPFS relies heavily on having its own copy of whatever you are sharing, splint up in chunks.  The only reason I could get this far was because of an experimental feature called “filestores”, which lets you use a backing of a real filesystem and it only holds its internal metadata to allow the system to work.&#xA;&#xA;That said, the API seems to require uploading the entire file when you want to add the file to IPFS/MFS, even though it is not storing it anywhere and is backing onto a filesystem.  This makes massive adding take way longer than needed.&#xA;&#xA;Speaking of the API, it seems to be in constant flux, with documentation for various versions of what you should use, but it’s not actually useful.  The API doco is good in that it seems to generally include the parameters to calls with a terse description, but the examples (when they exist) are generic to the point of useless.  In addition, there’s constant references to referring to their examples, but these “examples” also appear to be their test cases for their code, and as such, are written in a very abstract way that isn’t useful at all for someone just trying to explore.  I can appreciate looking at code to learn vs someone having to write a blog page for newbies, but the code examples just aren’t useful unless you’re deep into the ipfs ecosystem.&#xA;&#xA;I wrote my script in Python and in Go.  The go instance I stopped working on once I saw the weird way you have to send files to the API (multipart mime with specific headers.. and again.. chunk streaming the file).&#xA;&#xA;There are various complaints, mainly around people being expected to have a deep understanding of the internal ipfs system to interact with it, and its use, but I’m tired and done with it.  
I am keeping my instance going for fun (and since I started writing this article, it’s been a couple months and I’ve since moved to running it on Debian because IPFS Desktop screwed up on an upgrade).&#xA;&#xA; &#xA;&#xA;Summary&#xA;&#xA;IPFS is a cool idea, and could be extremely powerful for certain things, but it’s largely limited by its own massive scope, and the large amount of breaking changes and/or documentation something like this requires.&#xA;&#xA;I’m still going to keep TSL going, on IPFS, just for kicks, but it’s not a good solution for everyone.]]&gt;</description>
      <content:encoded><![CDATA[<p>IPFS for TSL 2</p>

<p>This is a continuation from <a href="https://blog.joyrex.net/the-silent-library-on-ipfs">the first blog post</a>.</p>

<p>I’ve spent a lot of time writing some better code to handle keeping the on-disk version of The Silent Library (or anything) up to date with what IPFS sees/knows about.  This means it scans both libraries and adds/removes files to/from IPFS as needed.</p>
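<p>The heart of that sync logic can be sketched as a pure comparison between two path listings. This is a minimal sketch of the idea, not the actual script; the function names and sample paths are illustrative:</p>

```python
# Sketch of the library-diff step: given the relative paths on disk and the
# relative paths IPFS/MFS already knows about, work out what to add and what
# to remove. Names here are illustrative, not from the real sync script.
from pathlib import Path

def scan_local(root):
    """Return the set of file paths under root, relative to root."""
    root = Path(root)
    return {str(p.relative_to(root)) for p in root.rglob("*") if p.is_file()}

def plan_sync(local_paths, remote_paths):
    """Compare the two listings and return (to_add, to_remove)."""
    local = set(local_paths)
    remote = set(remote_paths)
    return sorted(local - remote), sorted(remote - local)

to_add, to_remove = plan_sync(
    {"Show A/E01.mkv", "Show A/E02.mkv"},   # what's on disk
    {"Show A/E01.mkv", "Show B/E01.avi"},   # what MFS already holds
)
# to_add == ["Show A/E02.mkv"], to_remove == ["Show B/E01.avi"]
```

<p>Once you have a clean diff like this, the remaining (and much harder) part is pushing the additions through the IPFS API.</p>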

<p>Through this experience, I wrote a script in python, and a script in go.  The go one is unfinished, but the python one is working.  In both cases I did this in a “dumb” way, where I wrote all the code to scan and compare libraries, got a clean list of differences/changes that need to be made, then looked at uploading files.  This is where I got stuck.  This is for various reasons: I am new to writing serious stuff in both languages, the IPFS doco is poor, and IPFS itself seems to be in constant flux.  This experience has shown me that IPFS is not a good solution for sharing The Silent Library, or possibly any large project.  Below I will list what I’ve done <a href="https://blog.joyrex.net/the-silent-library-on-ipfs">since the original blog post</a>, as well as further details of the problems I hit trying to explore the IPFS ecosystem.</p>

<p>The Setup section was documented soon after the first blog post. The IPFS Issues and Summary sections were documented a couple of months later.</p>




<h3 id="the-setup">The Setup</h3>

<p>These steps could be useful to someone who is knowledgeable and wants to set up something similar.</p>

<p>At some point, docker (I think) was becoming too slow to access the files.  They were pinned locally but they still would take forever to download through the gateway, which was on the same docker instance.  I don’t get it.  Anyway, I decided to put it on my Windows server. In addition, the Windows server has a 10gig fibre connection to the NAS, so that’s fun.</p>

<p>To do this I downloaded kubo. I tried to run it as a service, but they don’t fork in daemon mode (I think), so I decided to try this the other way, using IPFS Desktop software.  This runs as a desktop app, so you must be logged in.  It sits in your systray and can be set to start when you log in.</p>

<p>After that, I installed Python 3.11 with winget and installed requirements for the script with pip.</p>

<p>I checked out the code, set up my config file, and kicked it off.  It took days to set up the IPFS version of what was on disk, but it did finally get there.  I set up the appropriate port forward (TCP/UDP 4001) through my router to this machine and off we go.  I also set up the reverse proxy to sit in front of the HTTP gateway (8080).</p>
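<p>For the reverse proxy in front of the gateway, a minimal path-style nginx server block might look like this. This is a sketch under my assumptions (hostname and TLS are placeholders; adapt it to whatever proxy you actually run):</p>

```nginx
# Minimal path-style gateway proxy: forward everything to the local
# IPFS HTTP gateway on 8080. Hostname is an example; add TLS as needed.
server {
    listen 80;
    server_name ipfs.example.net;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

<p>With something like this in place, /ipfs/[cid] URLs on your hostname get answered by the local node.</p>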

<h4 id="steps">Steps</h4>

<p>These could be adapted to other OSs, and I bet systemd could actually run the server as a real service (update: since writing this, I’ve moved to kubo on a Debian 11 server with a custom service file I wrote to run ipfs daemon as a regular user).</p>
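<p>For reference, a unit file along the lines of the one I ended up writing might look like this. The paths, user, and key names here are assumptions, not my actual file; adjust them for your install:</p>

```ini
# /etc/systemd/system/ipfs.service -- sketch of running kubo as a regular user.
[Unit]
Description=IPFS (kubo) daemon
After=network-online.target

[Service]
User=ipfs
Environment=IPFS_PATH=/home/ipfs/.ipfs
ExecStart=/usr/local/bin/ipfs daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

<p>That gets you a real service with restarts, which is what IPFS Desktop couldn’t give me on Windows.</p>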
<ol><li><p>Install and run <a href="https://docs.ipfs.tech/install/ipfs-desktop/">IPFS Desktop</a></p></li>

<li><p>Set up port forwarding so 4001 TCP/UDP are available publicly.</p></li>

<li><p>Using nginx or apache or IIS or something, set up a reverse proxy in front of 8080 TCP. (<a href="https://docs.ipfs.tech/concepts/ipfs-gateway/">IPFS Gateway</a> docs – I use Path type).</p></li>

<li><p>Do <strong>NOT</strong> make port 5001 publicly available.</p></li>

<li><p>Install Python 3</p></li>

<li><p>Clone the <a href="https://github.com/ejstacey/ipfs-tsl-tools">repo</a> holding the code.</p></li>

<li><p>Using the installed python, use pip to install the packages in requirements.txt:
python3 -m pip install -r requirements.txt</p></li>

<li><p>Set up your config file.</p>
<ol><li><p>Copy settings.cfg.example to settings.cfg (or tsl.cfg, whatever).</p></li>

<li><p>Make changes to the file you just copied to.  Some notes:</p>
<ol><li><p>IPFS has limitations, so your `[remote] tslDirectory` has to be a subdirectory under where IPFS stores its configuration. This means to make it work you should have a writable directory IPFS can access, and TSL has to be under it somewhere.  To save disk space you could symlink or bind mount it into that area (depending on what’s hosting your copy of TSL).</p></li>

<li><p>`[options] refresh` should always be True unless you’re debugging or know what you’re doing.</p></li>

<li><p>`[remote] ipnsKeyName` is a unique name you set. It’s tied to your ipfs instance.</p></li></ol></li></ol></li>

<li><p>Right click the IPFS Desktop icon in the systray, go to Advanced, and choose Move Repository Location.  Choose the IPFS directory you configured in the previous step.  It will quickly move over.</p></li>

<li><p>In the IPFS Desktop app, go to Settings on the left.  At the bottom is the config file text.  Go to ‘Experimental’ and look for ‘FilestoreEnabled’.  Set that value to ‘true’.  Save the file and restart the service (right click the systray icon to do the restart).</p></li>

<li><p>If everything has worked, IPFS is ready to be populated.  Go to where the code was checked out and run: python3 .\sync-tsl-to-ipfs.py --config tsl.cfg</p>
<ol><li>Use whichever config file name you chose</li></ol></li></ol>
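<p>Under the hood, each added file goes through a handful of calls against kubo’s RPC API on port 5001. This is a hedged sketch of that flow, not my actual script; the endpoint names come from the kubo HTTP RPC docs, but the helper names and structure here are illustrative:</p>

```python
# Sketch of a per-file add against kubo's RPC API (port 5001): create the
# MFS directory, add the file with filestore-backed nocopy, then link the
# resulting CID into the MFS tree. Helper names are illustrative.
API = "http://127.0.0.1:5001/api/v0"

def mkdir_req(mfs_dir):
    """URL and params for creating an MFS directory (with parents)."""
    return f"{API}/files/mkdir", {"arg": mfs_dir, "parents": "true"}

def add_req():
    """URL and params for a filestore-backed add; nocopy needs raw leaves."""
    return f"{API}/add", {"nocopy": "true", "raw-leaves": "true"}

def cp_req(cid, mfs_path):
    """URL and repeated arg params for linking a CID into MFS."""
    return f"{API}/files/cp", [("arg", f"/ipfs/{cid}"), ("arg", mfs_path)]

def add_file(abs_path, mfs_path):
    """Send one file through mkdir -> add -> cp (needs a running daemon)."""
    import requests  # imported here so the pure helpers above need no deps

    url, params = mkdir_req(mfs_path.rsplit("/", 1)[0])
    requests.post(url, params=params, timeout=30)
    url, params = add_req()
    # A nocopy add must tell the daemon where the file lives on disk,
    # hence the extra Abspath header on the multipart part.
    with open(abs_path, "rb") as fh:
        files = {"file": (abs_path, fh, "application/octet-stream",
                          {"Abspath": abs_path})}
        cid = requests.post(url, params=params, files=files,
                            timeout=None).json()["Hash"]
    url, params = cp_req(cid, mfs_path)
    requests.post(url, params=params, timeout=30)
    return cid
```

<p>Note the multipart shape on the add call: even with nocopy, the whole file still gets streamed through the API, which is the slowness I complain about below.</p>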

<p>Output should look like this:</p>

<p><img src="https://i.snap.as/1Reb8w61.png" alt=""/></p>

<p>Then you’ll see it creating directories on the MFS filesystem and adding files.</p>

<p>It can take days due to IPFS limitations.</p>

<p>The final thing it does when done is “publishes” the current root directory of /The Silent Library (or whatever you specified in your config file) to a permanent id (using IPFS’s IPNS system).  This means that when future updates are done, the hash of the root directory can change, but people can always look at the IPNS location to find the current version of the root directory.  You can use this URL in the gateway systems.</p>
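<p>The publish step itself is one more RPC call. A minimal sketch of what that request looks like, with a placeholder CID and key name (the real values come from your config file):</p>

```python
# Sketch of the final publish step: point an IPNS key at the current root
# CID so /ipns/<key id> always resolves to the newest tree. The CID and
# key name here are placeholders.
API = "http://127.0.0.1:5001/api/v0"

def publish_req(root_cid, key_name):
    """URL and params for ipfs name publish via the RPC API."""
    return f"{API}/name/publish", {"arg": f"/ipfs/{root_cid}", "key": key_name}

def ipns_url(gateway, ipns_id):
    """Stable gateway URL that keeps working across republishes."""
    return f"{gateway.rstrip('/')}/ipns/{ipns_id}"

url, params = publish_req("QmExampleRootCid", "tsl")
# POSTing this to a running daemon updates the key; readers then follow
# something like ipns_url("https://ipfs.joyrex.net", "<key id>")
```
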

<p>An example of this working is at <a href="https://tsl.joyrex.net/">https://tsl.joyrex.net/</a></p>

<p>Anyone who does this will end up contributing to the “seeders”.  Their IPNS won’t be used, but all the stuff under it will be shared, so when someone grabs a file, some comes from me, and some comes from whoever is seeding.</p>


<h3 id="ipfs-issues-and-why-it-isn-t-suitable">IPFS Issues and Why It Isn’t Suitable</h3>

<p>IPFS relies heavily on having its own copy of whatever you are sharing, split up into chunks.  The only reason I could get this far was because of an experimental feature called “filestores”, which lets you back IPFS with a real filesystem, so it only holds the internal metadata needed to make the system work.</p>

<p>That said, the API seems to require uploading the entire file when you want to add it to IPFS/MFS, even though it is not storing it anywhere and is backing onto a filesystem.  This makes adding a massive library take far longer than needed.</p>

<p>Speaking of the API, it seems to be in constant flux, with documentation for various versions of what you should use, none of it actually useful.  The API doco is good in that it generally includes the parameters to calls with a terse description, but the examples (when they exist) are generic to the point of uselessness.  In addition, there are constant pointers to their examples, but these “examples” also appear to be the test cases for their code, and as such are written in a very abstract way that isn’t helpful for someone just trying to explore.  I can appreciate learning from code vs someone having to write a blog page for newbies, but the code examples just aren’t useful unless you’re already deep into the ipfs ecosystem.</p>

<p>I wrote my script in Python and in Go.  I stopped working on the Go one once I saw the weird way you have to send files to the API (multipart MIME with specific headers.. and again.. chunk-streaming the file).</p>

<p>I have various other complaints, mainly around being expected to have a deep understanding of the internal ipfs system just to interact with it, but I’m tired and done with it.  I am keeping my instance going for fun (and since I started writing this article, a couple of months have passed and I’ve since moved to running it on Debian because IPFS Desktop screwed up on an upgrade).</p>


<h3 id="summary">Summary</h3>

<p>IPFS is a cool idea, and could be extremely powerful for certain things, but it’s largely limited by its own massive scope, its frequent breaking changes, and the amount of documentation a system like this requires.</p>

<p>I’m still going to keep TSL going, on IPFS, just for kicks, but it’s not a good solution for everyone.</p>
]]></content:encoded>
      <guid>https://blog.joyrex.net/the-silent-library-on-ipfs-part-2</guid>
      <pubDate>Sun, 15 Oct 2023 03:28:26 +0000</pubDate>
    </item>
    <item>
      <title>The Silent Library on IPFS</title>
      <link>https://blog.joyrex.net/the-silent-library-on-ipfs?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[This blog post explains what Gaki is, what IPFS is, and how I’ve combined them based off the excellent work of others.&#xA;&#xA;!--more--&#xA;&#xA;Gaki No Tsukai&#xA;&#xA;I love Gaki No Tsukai. This Japanese variety show with some of Japan’s best comedians, both as members and as guests, is amazing.  They are probably most well known in the west for their Batsu (punishment) Games. Most of the popular Batsu Games are them being in a situation for a day (newspapermen, nurses, cops, students, etc) where they aren’t allowed to laugh.  If they laugh, they get slapped on the butt (normally).  It’s hilarious.&#xA;&#xA;Gaki has various people and groups that also love the show and put in an AMAZING amount of effort doing subtitles for Gaki and Gaki-related stuff (other shows, commercials, etc) so non-Japanese-speakers can enjoy it.  This is all content we’d never be able to experience without these volunteers, who have been working for DECADES to get stuff translated. It’s a truly amazing feat, with different people shifting in and out of the scene during the years.  There’s various places that try to offer some or all of the translated work.  Here’s some of the big ones:&#xA;&#xA;Team Gaki: When yearly batsu games were being done by Gaki, Team Gaki took on the monumental effort to get them translated (as well as other random episodes)&#xA;The Silent Library: An attempt to catalogue and organise ALL translated work done for Gaki and Gaki-adjacent shows/media.  It has Mega as a backend for grabbing individual episodes and releases a torrent once or twice a year to let people grab an updated full archive. Recently they’ve also been offering a repository for raws (trimmed, un-translated versions) of Japanese show so people that do the timing (specifying when subtitles show and what they look like) and translating (translating and filling out the subtitle sections set up by the timer) to work with.  
The closest thing we have to an authoritative source for English (and some others) translations of Gaki and Gaki-related shows.&#xA;Chiki Chiki Tube: A Peertube instance built to allow streaming of things in The Silent Library (and more!).  Excellent resource when you just want to watch clips without having to download anything.&#xA;&#xA;Unmentioned are various current and past timers, translators, QA people, organisers, web sites, etc.  People have volunteered so much time and effort and resources to get something out for everyone to enjoy.  It’s beautiful.&#xA;&#xA;IPFS&#xA;&#xA;I hadn’t used IPFS before, but it seemed cool.  It backends onto a bittorrent (or bittorrent-like) p2p network to break up files into chunks and distribute them among peers.  Sounds like bittorrent so far, but each file is like an individual torrent, so you don’t need a torrent file to hold a group.  The features of ipfs allows you to specify directories, so you can link to an ipfs directory to get a list of ipfs files under that directory, then download those files, and so on.&#xA;&#xA;One of the benefits of ipfs is they have set up a gateway system, so people can access IPFS files via HTTP in their normal browser.  People can run their own gateway servers, but there’s also some big ones out there like Cloudflare.&#xA;&#xA;It’s worth mentioning that IPFS has been around before “web3” existed, but they seem to have somewhat dived into all that BS. They’re using it for cryptocurrency or something, I don’t know.  It seems to somehow be linked to Ethereum.  I don’t know and I don’t care.  I’m ignoring all that shit.  I am focusing on using it what it’s meant for..  a p2p distributed filesystem.&#xA;&#xA;Gaki For All&#xA;&#xA;As I see it, there are three types of people that want to interact with Gaki, and what is offered by the various providers.  
Here’s a table of my thoughts:&#xA;&#xA;I see IPFS as being a possible alternative to the Mega portion of the TSL, giving us a way to offer individual downloads to people.  This can be useful if for some reason we don’t want or can’t use Mega (I believe right now there is a donator helping fund the Mega account for the library!  This community is so good!).&#xA;&#xA;This comes with some risks. People need to use IPFS natively if they want to contribute back to “seeding” the filesystem.  The way IPFS works is some people have “pinned” files, which are always provided from that node.  In IPFS, when someone accesses a file (using IPFS Desktop or similar apps), they then serve that file for a while (there’s settings on how much gets locally cached and shared).  If people access files via an HTTP gateway, then that gateway will serve the file, but the individual downloader won’t contribute back.  This means if we don’t have enough people volunteering to offer their TSL libraries on IPFS, or not using IPFS, downloads could be slow.&#xA;&#xA;That said, if it’s not obvious yet, I’ve set up a TSL repo on IPFS and am serving it from my node.  I also set up a gateway that lets normal users browse and download files from a current version of the library.  You can see that here: https://tsl.joyrex.net.  Notice all the links refer to ipfs.joyrex.net, which is my gateway server.  Files on IPFS get identified by a unique id called a “cid”.  This cid is the hash string you find in the URLs on that page.&#xA;&#xA;This blog post (after all this intro stuff) will cover setting up an IPFS node in Docker, sharing the files on IPFS, and some extra notes.  This won’t apply perfectly for everyone, but at least should help anyone interested in starting down the path.&#xA;&#xA;Docker&#xA;&#xA;I am using Docker on my synology NAS, but any docker instance works.  You also don’t need to run docker, you just need ipfs running somewhere, including the ipfs cli program.  
IPFS is a protocol, so there are multiple implementations of it. I am using kubo (was called go-ipfs).  It seems to be the biggest and most popular implementation.  Standard docs for setting up on docker are here.&#xA;&#xA;On Synology, it looks like this:&#xA;&#xA;Some notes:&#xA;&#xA;While various ports internally need to be exposed, the only one you want to port forward publicly (or upnp, but that doesn’t work for me) is port 4001.  You DO NOT want to forward 5001.  It allows unauthenticated root access to your ipfs.&#xA;  4001: The QUIC port used for ipfs p2p communication&#xA;  5001: The RPC API port used to control the node/see status/etc. There’s also a web interface at /webui.&#xA;  8080: The HTTP port for the IPFS gateway. You don’t need to map this if you don’t want to run a gateway. Use via /ipfs/\[cid\] in the URL.&#xA;  8081: The HTTPS port for the IPFS gateway. You don’t need to map this if you don’t want to run a gateway. If you’re running a reverse proxy you probably don’t even need this, just have it talk HTTP to 8080.&#xA;You want three locations to be mapped in, two read-write, and one read-only.&#xA;  /data/ipfs is used to store all the ipfs application config/etc&#xA;  /export is used to store data as it converts it to ipfs format (normally)&#xA;  /data/mounted-files/tsl is READ ONLY and a current copy of The Silent Library syncthing repo.  This means that TSL stays the only authoritative source.&#xA;The processes run in the docker image runs as uid 1000, gid 100, so the mapped folders have to have write access to those (except for the TSL folder which only needs read-only access)&#xA;&#xA;After starting the instance, it generates a default config and starts the daemon.  Now we need to open a shell session on the docker container to make some config changes.  
You should have a root ‘sh’ session.&#xA;&#xA;IPFS Config&#xA;&#xA;In the shell session, run the following:&#xA;&#xA;\# ipfs config --json Experimental.FilestoreEnabled true&#xA;&#xA;This enables the filestore feature, which allows us to share files without being broken up into chunks on the filesystem (duplicating the data).  More information is here.&#xA;&#xA;Restart the node.&#xA;&#xA;add-tsl.sh&#xA;&#xA;Into the mapped /data/ipfs directory (wherever you’re mapping it from, put the file there, and it’ll appear in the container), put the file add-tsl.sh. Check the file source to be sure, it is set up like you want (you may want to alter the filename, or the mounted TSL directory).&#xA;&#xA;This script will go through TSL directory and add the files to the local ipfs store.  IPFS has the flat filestore, but it also allows building a directory structure in something it calls the Mutable File System. It creates directories and uploads the files into the directory, so the layout matches the original TSL.  It also uses the actual files to do the add instead of breaking files up and storing their chunks (duplicates the data).  This is why FilestoreEnabled had to be set earlier.&#xA;&#xA;Start another shell session on the node.  Run the following to test:&#xA;&#xA;ipfs cat /ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme&#xA;&#xA;This downloads a file from IPFS (a readme file)&#xA;&#xA;If that’s working, start the upload.  On my node it takes a couple hours to  upload everything to ipfs.&#xA;&#xA;cd /data/ipfs/; ./add-tsl.sh&#xA;&#xA;This will run and add the tsl libraries.  Watch the file hash-list.  
You should see entries like:&#xA;&#xA;Ashita ga Aru sa:Ashita ga Aru sa E01.avi:QmWRwP4pdCrHrtysDj65ANT5kNupsddBBJrH8j3vZhKU8w&#xA;Ashita ga Aru sa:Ashita ga Aru sa E02.avi:QmRWhbcjqKbwPyiPiMbHcP8v5BYK8jMRwGm8N6cCp2G7AL&#xA;Ashita ga Aru sa:Ashita ga Aru sa E03.avi:QmWiRu95VBKmPd4BM7Fut4hcA1opJ9ymwhNZXm3UNbBo1R&#xA;Ashita ga Aru sa:Ashita ga Aru sa E04.avi:QmdMLtW7Nk8sCviTKRbyUcdtfZDcH7RGfsS6SuWE8hd2Pz&#xA;Ashita ga Aru sa:Ashita ga Aru sa E05.avi:QmXhHKM5s2xFFSRYgK9cB62qqCTAF7GntmW7oosKMw6zoW&#xA;Ashita ga Aru sa:Ashita ga Aru sa E06.avi:Qmd4K51kbaX7B2gzt8RKpsnZGRzxdtg6hyAKEfKgQ9ogk4&#xA;Ashita ga Aru sa:Ashita ga Aru sa E07.avi:QmVte4fZ5rhVdNFC25ZicNhzZWV5Hd58apyMP6DpwmMrbJ&#xA;Ashita ga Aru sa:Ashita ga Aru sa E08.mp4:QmbFq7w4kLTfLdTPZkzDxa7YEvEi7HaetKZEnGVf9YtuRY&#xA;Ashita ga Aru sa:Ashita ga Aru sa E09.mkv:QmSYx5kYPtvHc6buBmyEbkjgwgJrJtJ7WzB7Nvd8Kq7xRP&#xA;Ashita ga Aru sa:Ashita ga Aru sa E10.mp4:QmbwrxFK7v1kSqzLvaCpESdAEXJqBy8b3Wyp5JqW6wk18T&#xA;&#xA;This shows it’s working correctly. 
If you’re getting blank entries for the third field (fields are split by :), cancel the script, remove the hash-file, and restart the script.&#xA;&#xA;Errors will possibly pop up in the window the shell script is running that say:Error: to-files: cannot put node in path &#34;/The Silent Library/Documental/Documentary of Documental/Season 1/Translator Notes/Documentary of Documental S01E02 TN.txt&#34;: directory already has entry by that name                                    &#xA;&#xA;Error: to-files: cannot put node in path &#34;/The Silent Library/Documental/Documentary of Documental/Season 2/Translator Notes/Documentary of documental S02E04 TN.txt&#34;: directory already has entry by that name                                    &#xA;&#xA;Error: to-files: cannot put node in path &#34;/The Silent Library/Documental/Documentary of Documental/Season 2/Translator Notes/Documentary of documental S02E04 TN.txt&#34;: directory already has entry by that name &#xA;&#xA;This means it’s identified it’s already been set up with the MFS.  This is fine.&#xA;&#xA;The script can be run multiple times.  If it has already uploaded that file it’ll just report the hash of the file.&#xA;&#xA;When this finishes, you’re in the online cluster serving these files!&#xA;&#xA;(Optional) setting up a gateway&#xA;&#xA;Gateways can be used from anywhere (unless you put limits on it) to access any files on IPFS.  Because of this, existing gateways can be used to access to the TSL files.  If you would like to set up your own, though, it’s pretty easy.&#xA;&#xA;In a normal browser, access port 8080 (or if in docker, the equivalent mapped port) and access /ipfs/ QmPZ9gcCEpqKTo6aq61g2nXGUhM4iCL3ewB6LDXZCtioEB.  This is the cid of the readme file retrieved earlier.  Going to /ipfs/QmW7FFR7kJ6TraVU3G9MS12N6iUgr1gmyESYEYraArjogA will show you the root directory of the TSL collection.  
From there users can access any of the current TSL files and download them, all over HTTP.&#xA;&#xA;My gateway (and a link to the TSL is): https://ipfs.joyrex.net/ipfs/QmW7FFR7kJ6TraVU3G9MS12N6iUgr1gmyESYEYraArjogA&#xA;&#xA;This can also be accessed from https://tsl.joyrex.net/&#xA;&#xA;(Optional) playing with the RPC web interface&#xA;&#xA;The 5001 port (which you should NEVER publicly make available) has a web interface available at /ipfs/.  For example, connect in your web browser to  localhost:5001/ipfs/ to access it (or if you’re in docker, whatever you mapped the port to).  If you’ve run add-tsl.sh, all your files should show up in the Files area.&#xA;&#xA;Random Thoughts&#xA;&#xA;This method, while a bit convoluted, allows us to share without having to re-upload to each other.  Syncthing is used to distribute once.  Then we run the add-tsl.sh script (or some form of it) every so often to add any changes to our shared files (probably a cron with a find -ctime to get the latest changes).&#xA;&#xA;I don’t know how it handles removing files yet. Well, I do know: it doesn’t. I still will have to implement something that removes files that exist in the MFS store but don’t exist in the library anymore.&#xA;&#xA;Before I realised that the gateway instance had a great MFS browser built in, I wrote (copied) some shitty code from an example site to capture and display the links my own way.  I also import the files and their associated codes into a DB table so I can refer against them in the future if needed.  All that code exists in a repo I put up with my scripts: ejstacey/ipfs-tsl-tools (github.com)&#xA;&#xA;Q&amp;A:&#xA;&#xA;Q: What about IPFS clusters (IPFS Cluster - Pinset orchestration for IPFS)?&#xA;&#xA;A: This could still be an option, however I think it might conflict with us each only using syncthing to get a copy of the new stuff once.  
It’s more built for one node updating a file and it going out to the others over the IPFS network, which we avoid.&#xA;&#xA;Q: Why doesn’t this replace syncthing or the torrent or something else?&#xA;&#xA;A: Because everything used right now fits for its purpose. Syncthing is great because bipedal controls everything. We want him to be the curator. The torrent file is great because it allows people to get everything (up to a certain date) in one large collection using a popular format. IPFS isn’t good for either of those, but it IS handy for one-off grabs.&#xA;&#xA;Q: Is speed going to be an issue?&#xA;&#xA;A: Possibly.  As it’s peer-to-peer, people who are using an IPFS client will share some or all of the files they’ve grabbed (IPFS has garbage collection that desides when to stop sharing a file. the person can pin it to their local instance to make garbage collection ignore the file).  People who are using a gateway server don’t share back.  If there’s a large group of “constant seeders” like on the torrent file, it should go quick.  I know quite a few people who do syncthing also started seeding to the torrent file with their existing data, getting more seeders more quickly.  Maybe we could do something like that here if people are willing.  There’s also the possibility that if we use a gateway with a huge upload (like Cloudflare’s), it’ll pin the file locally on cloudflare’s instance, so for future people it’d be quick.  That last part is just a theory though.&#xA;&#xA;Q: Why?&#xA;&#xA;A: It seemed like a good project for me to play with, and it could help the community.  Mega is great, but does have limitations and costs.  I thought it could be good to have a backup, just in case.  
Also bipedal has started thinking more about the TSL Raws wing, which is larger than the standard TSL (\~3.3TB and growing vs 1.5TB), and it may end up that those files can’t live on mega with the current plan(s), so needs another way for timers/subbers/etc to get the files without jumping on the entire syncthing (or whatever).&#xA;&#xA;If you want to know more, there’s a ton of doco out there on ipfs.  If there are questions find me at @ejstacey on discord, or @ejstacey.joyrex.net on bluesky, or @ejstacey@kolektiva.social on mastodon/activitypub.]]&gt;</description>
      <content:encoded><![CDATA[<p>This blog post explains what Gaki is, what IPFS is, and how I’ve combined them based off the excellent work of others.</p>



<h3 id="gaki-no-tsukai">Gaki No Tsukai</h3>

<p>I love <a href="https://en.wikipedia.org/wiki/Downtown_no_Gaki_no_Tsukai_ya_Arahende!!">Gaki No Tsukai</a>. This Japanese variety show with some of Japan’s best comedians, both as members and as guests, is amazing.  They are probably most well known in the west for their Batsu (punishment) Games. Most of the popular Batsu Games are them being in a situation for a day (newspapermen, nurses, cops, students, etc) where they aren’t allowed to laugh.  If they laugh, they get slapped on the butt (normally).  It’s hilarious.</p>

<p>Gaki has various people and groups that also love the show and put in an AMAZING amount of effort doing subtitles for Gaki and Gaki-related stuff (other shows, commercials, etc) so non-Japanese-speakers can enjoy it.  This is all content we’d never be able to experience without these volunteers, who have been working for DECADES to get stuff translated. It’s a truly amazing feat, with different people shifting in and out of the scene during the years.  There’s various places that try to offer some or all of the translated work.  Here’s some of the big ones:</p>
<ul><li><a href="https://www.teamgaki.com/">Team Gaki</a>: When yearly batsu games were being done by Gaki, Team Gaki took on the monumental effort to get them translated (as well as other random episodes)</li>
<li><a href="https://thesilentlibrary.xyz/">The Silent Library</a>: An attempt to catalogue and organise ALL translated work done for Gaki and Gaki-adjacent shows/media.  It has Mega as a backend for grabbing individual episodes and releases a torrent once or twice a year to let people grab an updated full archive. Recently they’ve also been offering a repository for raws (trimmed, un-translated versions) of Japanese shows for the people who do the timing (specifying when subtitles show and what they look like) and the translating (filling out the subtitle sections set up by the timer).  The closest thing we have to an authoritative source for English (and some other) translations of Gaki and Gaki-related shows.</li>
<li><a href="https://chikichiki.tube/">Chiki Chiki Tube</a>: A Peertube instance built to allow streaming of things in The Silent Library (and more!).  Excellent resource when you just want to watch clips without having to download anything.</li></ul>

<p>Unmentioned are various current and past timers, translators, QA people, organisers, web sites, etc.  People have volunteered so much time and effort and resources to get something out for everyone to enjoy.  It’s beautiful.</p>

<h3 id="ipfs">IPFS</h3>

<p>I hadn’t used IPFS before, but it seemed cool.  It backends onto a bittorrent (or bittorrent-like) p2p network to break up files into chunks and distribute them among peers.  Sounds like bittorrent so far, but each file is like an individual torrent, so you don’t need a torrent file to hold a group.  IPFS also allows you to specify directories, so you can link to an ipfs directory to get a list of ipfs files under that directory, then download those files, and so on.</p>

<p>One of the benefits of ipfs is they have set up a gateway system, so people can access IPFS files via HTTP in their normal browser.  People can run their own gateway servers, but there are also some big ones out there like <a href="https://blog.cloudflare.com/distributed-web-gateway/">Cloudflare</a>.</p>

<p>It’s worth mentioning that IPFS has been around since before “web3” existed, but they seem to have somewhat dived into all that BS. They’re using it for cryptocurrency or something, I don’t know.  It seems to somehow be linked to Ethereum.  I don’t know and I don’t care.  I’m ignoring all that shit.  I am focusing on using it for what it’s meant to be: a p2p distributed filesystem.</p>

<h3 id="gaki-for-all">Gaki For All</h3>

<p>As I see it, there are three types of people that want to interact with Gaki, and what is offered by the various providers.  Here’s a table of my thoughts:</p>

<p><img src="https://i.snap.as/HZi05LuP.png" alt=""/></p>

<p>I see IPFS as being a possible alternative to the Mega portion of the TSL, giving us a way to offer individual downloads to people.  This can be useful if for some reason we don’t want or can’t use Mega (I believe right now there is a donator helping fund the Mega account for the library!  This community is so good!).</p>

<p>This comes with some risks. People need to use IPFS natively if they want to contribute back to “seeding” the filesystem.  The way IPFS works is some people have “pinned” files, which are always provided from that node.  In IPFS, when someone accesses a file (using IPFS Desktop or similar apps), they then serve that file for a while (there are settings for how much gets locally cached and shared).  If people access files via an HTTP gateway, then that gateway will serve the file, but the individual downloader won’t contribute back.  This means that if we don’t have enough people volunteering to offer their TSL libraries over IPFS, or if most downloaders skip IPFS for the gateway, downloads could be slow.</p>

<p>That said, if it’s not obvious yet, I’ve set up a TSL repo on IPFS and am serving it from my node.  I also set up a gateway that lets normal users browse and download files from a current version of the library.  You can see that here: <a href="https://tsl.joyrex.net/">https://tsl.joyrex.net</a>.  Notice all the links refer to ipfs.joyrex.net, which is my gateway server.  Files on IPFS get identified by a unique id called a “cid”.  This cid is the hash string you find in the URLs on that page.</p>
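<p>Those gateway links follow a simple pattern: the gateway host, then /ipfs/, then the cid, then optionally a path inside that directory. A small sketch of building one (the hostname is mine; the cid and filename are just examples):</p>

```python
# Sketch of path-style gateway link construction: host + /ipfs/ + cid,
# plus an optional percent-encoded path inside the directory.
from urllib.parse import quote

def gateway_url(gateway, cid, path=""):
    """Build a path-style gateway URL for a cid."""
    url = f"{gateway.rstrip('/')}/ipfs/{cid}"
    if path:
        url += "/" + quote(path)
    return url

print(gateway_url("https://ipfs.joyrex.net",
                  "QmWRwP4pdCrHrtysDj65ANT5kNupsddBBJrH8j3vZhKU8w",
                  "Ashita ga Aru sa E01.avi"))
```

<p>Any public gateway accepts the same /ipfs/[cid] form, which is why the links on my page work even if you swap the hostname for another gateway.</p>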

<p>This blog post (after all this intro stuff) will cover setting up an IPFS node in Docker, sharing the files on IPFS, and some extra notes.  This won’t apply perfectly for everyone, but at least should help anyone interested in starting down the path.</p>

<h3 id="docker">Docker</h3>

<p>I am using Docker on my Synology NAS, but any Docker instance works.  You don’t even need Docker; you just need IPFS running somewhere, including the ipfs CLI program.  IPFS is a protocol, so there are multiple implementations of it.  I am using Kubo (formerly go-ipfs), which seems to be the biggest and most popular implementation.  Standard docs for setting it up in Docker are <a href="https://docs.ipfs.tech/install/run-ipfs-inside-docker/">here</a>.</p>

<p>On Synology, it looks like this:</p>

<p><img src="https://i.snap.as/oGMcM6wZ.png" alt=""/></p>

<p><img src="https://i.snap.as/aNf8Kchc.png" alt=""/></p>

<p><img src="https://i.snap.as/ES69uTX8.png" alt=""/></p>

<p>Some notes:</p>
<ul><li>While various ports internally need to be exposed, the <strong>only</strong> one you want to port forward publicly (or via UPnP, though that doesn’t work for me) is port 4001.  You <strong>DO NOT</strong> want to forward 5001; it allows unauthenticated root access to your IPFS node.
<ul><li>4001: The QUIC port used for ipfs p2p communication</li>
<li>5001: The RPC API port used to control the node/see status/etc. There’s also a web interface at /webui.</li>
<li>8080: The HTTP port for the IPFS gateway. You don’t need to map this if you don’t want to run a gateway. Use via /ipfs/[cid] in the URL.</li>
<li>8081: The HTTPS port for the IPFS gateway. You don’t need to map this if you don’t want to run a gateway. If you’re running a reverse proxy you probably don’t even need this, just have it talk HTTP to 8080.</li></ul></li>
<li>You want three locations to be mapped in, two read-write, and one read-only.
<ul><li>/data/ipfs is used to store all the ipfs application config/etc</li>
<li>/export is the staging area where data is normally stored as it gets converted into IPFS format</li>
<li>/data/mounted-files/tsl is <strong>READ ONLY</strong> and a current copy of The Silent Library syncthing repo.  This means that TSL stays the only authoritative source.</li></ul></li>
<li>The process in the Docker image runs as uid 1000, gid 100, so the mapped folders need to be writable by that user (except the TSL folder, which only needs read access)</li></ul>
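<p>If you’re on plain Docker rather than the Synology UI, the setup above roughly translates to a single <code>docker run</code>.  The host paths here are hypothetical placeholders for wherever your config, staging area, and syncthing copy live:</p>

```shell
# Hypothetical host paths -- substitute your own NAS layout
IPFS_DATA=/volume1/docker/ipfs/data        # config/datastore (read-write)
IPFS_STAGING=/volume1/docker/ipfs/export   # staging area (read-write)
TSL_COPY=/volume1/syncthing/tsl            # syncthing copy (read-only!)

docker run -d --name ipfs_node \
  -v "$IPFS_DATA":/data/ipfs \
  -v "$IPFS_STAGING":/export \
  -v "$TSL_COPY":/data/mounted-files/tsl:ro \
  -p 4001:4001 -p 4001:4001/udp \
  -p 127.0.0.1:5001:5001 \
  -p 127.0.0.1:8080:8080 \
  ipfs/kubo:latest
```

<p>Binding 5001 and 8080 to 127.0.0.1 keeps them off the network; only 4001 should ever be forwarded.</p>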

<p>After starting the instance, it generates a default config and starts the daemon.  Next we need to open a shell session on the Docker container to make some config changes; you should get a root ‘sh’ session.</p>

<h3 id="ipfs-config">IPFS Config</h3>

<p>In the shell session, run the following:</p>

<pre><code># ipfs config --json Experimental.FilestoreEnabled true
</code></pre>

<p>This enables the filestore feature, which lets us share files without them being broken up into chunks and duplicated inside the IPFS datastore.  More information is <a href="https://github.com/ipfs/kubo/blob/master/docs/experimental-features.md#ipfs-filestore">here</a>.</p>
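<p>With the filestore enabled, files can be added by reference using the <code>--nocopy</code> flag.  A quick sketch (the path is a hypothetical example):</p>

```shell
# --nocopy stores block metadata that points back at the original
# file on disk, rather than copying chunks into the datastore.
# Requires Experimental.FilestoreEnabled=true.
ipfs add --nocopy "/data/mounted-files/tsl/Some Show/Some Show E01.avi"
```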

<p>Restart the node.</p>

<h3 id="add-tsl-sh">add-tsl.sh</h3>

<p>Put the file <a href="https://github.com/ejstacey/ipfs-tsl-tools/blob/main/add-tsl.sh">add-tsl.sh</a> into the mapped /data/ipfs directory (place it in the host folder you’re mapping from and it’ll appear in the container).  Check the script source to be sure it’s set up the way you want (you may want to alter the hash-list filename or the mounted TSL directory).</p>

<p>This script goes through the TSL directory and adds the files to the local IPFS store.  IPFS has the flat filestore, but it also allows building a directory structure in something it calls the Mutable File System (MFS).  The script creates directories and adds the files into them, so the layout matches the original TSL.  It also references the actual files on disk for the add instead of breaking them up and storing their chunks (which would duplicate the data).  This is why FilestoreEnabled had to be set earlier.</p>
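<p>I won’t reproduce the script here, but a rough per-file sketch of what it does looks like this (filenames are illustrative; check the real script for the details):</p>

```shell
src="/data/mounted-files/tsl/Ashita ga Aru sa/Ashita ga Aru sa E01.avi"

# 1. Add the file by reference (needs FilestoreEnabled) and capture
#    only the final cid (-Q)
cid=$(ipfs add --nocopy -Q "$src")

# 2. Mirror the TSL layout inside the Mutable File System
ipfs files mkdir -p "/The Silent Library/Ashita ga Aru sa"
ipfs files cp "/ipfs/$cid" "/The Silent Library/Ashita ga Aru sa/Ashita ga Aru sa E01.avi"

# 3. Record dir:file:cid in the hash-list file
echo "Ashita ga Aru sa:Ashita ga Aru sa E01.avi:$cid" >> hash-list
```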

<p>Start another shell session on the node.  Run the following to test:</p>

<pre><code># ipfs cat /ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme
</code></pre>

<p>This downloads a file from IPFS (a readme file).</p>

<p>If that’s working, start the upload.  On my node it takes a couple of hours to upload everything to IPFS.</p>

<pre><code># cd /data/ipfs/; ./add-tsl.sh
</code></pre>

<p>This will run and add the TSL libraries.  Watch the hash-list file.  You should see entries like:</p>

<pre><code>Ashita ga Aru sa:Ashita ga Aru sa E01.avi:QmWRwP4pdCrHrtysDj65ANT5kNupsddBBJrH8j3vZhKU8w
Ashita ga Aru sa:Ashita ga Aru sa E02.avi:QmRWhbcjqKbwPyiPiMbHcP8v5BYK8jMRwGm8N6cCp2G7AL
Ashita ga Aru sa:Ashita ga Aru sa E03.avi:QmWiRu95VBKmPd4BM7Fut4hcA1opJ9ymwhNZXm3UNbBo1R
Ashita ga Aru sa:Ashita ga Aru sa E04.avi:QmdMLtW7Nk8sCviTKRbyUcdtfZDcH7RGfsS6SuWE8hd2Pz
Ashita ga Aru sa:Ashita ga Aru sa E05.avi:QmXhHKM5s2xFFSRYgK9cB62qqCTAF7GntmW7oosKMw6zoW
Ashita ga Aru sa:Ashita ga Aru sa E06.avi:Qmd4K51kbaX7B2gzt8RKpsnZGRzxdtg6hyAKEfKgQ9ogk4
Ashita ga Aru sa:Ashita ga Aru sa E07.avi:QmVte4fZ5rhVdNFC25ZicNhzZWV5Hd58apyMP6DpwmMrbJ
Ashita ga Aru sa:Ashita ga Aru sa E08.mp4:QmbFq7w4kLTfLdTPZkzDxa7YEvEi7HaetKZEnGVf9YtuRY
Ashita ga Aru sa:Ashita ga Aru sa E09.mkv:QmSYx5kYPtvHc6buBmyEbkjgwgJrJtJ7WzB7Nvd8Kq7xRP
Ashita ga Aru sa:Ashita ga Aru sa E10.mp4:QmbwrxFK7v1kSqzLvaCpESdAEXJqBy8b3Wyp5JqW6wk18T
</code></pre>

<p>This shows it’s working correctly.  If you’re getting blank entries for the third field (fields are split by :), cancel the script, remove the hash-list file, and restart the script.</p>
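<p>A quick way to spot those broken entries without eyeballing the whole file is to let awk check the third field.  This demo runs on an inline two-line sample; point it at your real hash-list instead:</p>

```shell
# Build a tiny sample hash-list: line 2 has a blank cid field
printf 'Show A:Show A E01.avi:QmX\nShow B:Show B E01.avi:\n' > /tmp/hash-list-sample

# Print line number and content of any entry with a blank third field
awk -F: '$3 == "" { print NR": "$0 }' /tmp/hash-list-sample
```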

<p>Errors may pop up in the window where the shell script is running, saying:</p>

<pre><code>Error: to-files: cannot put node in path &#34;/The Silent Library/Documental/Documentary of Documental/Season 1/Translator Notes/Documentary of Documental S01E02 TN.txt&#34;: directory already has entry by that name
Error: to-files: cannot put node in path &#34;/The Silent Library/Documental/Documentary of Documental/Season 2/Translator Notes/Documentary of documental S02E04 TN.txt&#34;: directory already has entry by that name
</code></pre>

<p>This means the file has already been added to the MFS.  This is fine.</p>

<p>The script can be run multiple times.  If it has already uploaded that file it’ll just report the hash of the file.</p>

<p>When this finishes, you’re in the online cluster serving these files!</p>

<h3 id="optional-setting-up-a-gateway">(Optional) setting up a gateway</h3>

<p>Gateways can be used from anywhere (unless you put limits on them) to access any files on IPFS.  Because of this, existing gateways can be used to access the TSL files.  If you would like to set up your own, though, it’s pretty easy.</p>

<p>In a normal browser, access port 8080 (or, if in Docker, the equivalent mapped port) and go to /ipfs/QmPZ9gcCEpqKTo6aq61g2nXGUhM4iCL3ewB6LDXZCtioEB.  This is the cid of the readme file retrieved earlier.  Going to /ipfs/QmW7FFR7kJ6TraVU3G9MS12N6iUgr1gmyESYEYraArjogA will show you the root directory of the TSL collection.  From there users can access and download any of the current TSL files, all over HTTP.</p>
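<p>The same fetches work from the command line, which is handy for testing a freshly configured gateway.  Adjust the host/port to whatever you mapped:</p>

```shell
# Grab the readme through your own gateway
curl -o readme "http://localhost:8080/ipfs/QmPZ9gcCEpqKTo6aq61g2nXGUhM4iCL3ewB6LDXZCtioEB"

# Fetch the TSL root directory listing (HTML)
curl "http://localhost:8080/ipfs/QmW7FFR7kJ6TraVU3G9MS12N6iUgr1gmyESYEYraArjogA/"
```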

<p>My gateway (with a link to the TSL) is: <a href="https://ipfs.joyrex.net/ipfs/QmW7FFR7kJ6TraVU3G9MS12N6iUgr1gmyESYEYraArjogA/">https://ipfs.joyrex.net/ipfs/QmW7FFR7kJ6TraVU3G9MS12N6iUgr1gmyESYEYraArjogA</a></p>

<p>This can also be accessed from <a href="https://tsl.joyrex.net/">https://tsl.joyrex.net/</a></p>

<h3 id="optional-playing-with-the-rpc-web-interface">(Optional) playing with the RPC web interface</h3>

<p>The 5001 port (which you should NEVER make publicly available) has a web interface at /webui.  For example, connect your web browser to localhost:5001/webui to access it (or, if you’re in Docker, whatever you mapped the port to).  If you’ve run add-tsl.sh, all your files should show up in the Files area.</p>

<h3 id="random-thoughts">Random Thoughts</h3>

<p>This method, while a bit convoluted, allows us to share without having to re-upload to each other.  Syncthing is used to distribute once.  Then we run the add-tsl.sh script (or some form of it) every so often to add any changes to our shared files (probably a cron job using find -ctime to pick up the latest changes).</p>
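<p>That incremental re-add could be sketched as something like the following (an untested idea with hypothetical paths; the real thing would reuse the add-tsl.sh logic rather than a bare add):</p>

```shell
# Re-add only files changed in the last day; suitable for a daily cron.
find /data/mounted-files/tsl -type f -ctime -1 -print0 |
  while IFS= read -r -d '' f; do
    ipfs add --nocopy -Q "$f"
  done
```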

<p>I don’t know how it handles removing files yet. Well, I do know: it doesn’t. I still will have to implement something that removes files that exist in the MFS store but don’t exist in the library anymore.</p>
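<p>One possible shape for that cleanup, comparing the top-level MFS listing against the directory on disk (a rough, untested sketch; a real version would recurse into subdirectories):</p>

```shell
# Names present on disk vs names present in the MFS root
ls /data/mounted-files/tsl | sort > /tmp/on-disk
ipfs files ls "/The Silent Library" | sort > /tmp/in-mfs

# comm -13 prints lines only in the second file: MFS entries with
# no matching entry on disk, i.e. deletion candidates
comm -13 /tmp/on-disk /tmp/in-mfs | while IFS= read -r name; do
  ipfs files rm -r "/The Silent Library/$name"
done
```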

<p>Before I realised that the gateway instance had a great MFS browser built in, I wrote (copied) some shitty code from an example site to capture and display the links my own way.  I also import the files and their associated codes into a DB table so I can refer against them in the future if needed.  All that code exists in a repo I put up with my scripts: <a href="https://github.com/ejstacey/ipfs-tsl-tools">ejstacey/ipfs-tsl-tools (github.com)</a></p>

<p>Q&amp;A:</p>

<p>Q: What about IPFS clusters (<a href="https://ipfscluster.io/">IPFS Cluster – Pinset orchestration for IPFS</a>)?</p>

<p>A: This could still be an option, however I think it might conflict with us each only using syncthing to get a copy of the new stuff once.  It’s more built for one node updating a file and it going out to the others over the IPFS network, which we avoid.</p>

<p>Q: Why doesn’t this replace syncthing or the torrent or something else?</p>

<p>A: Because everything used right now fits for its purpose. Syncthing is great because bipedal controls everything. We want him to be the curator. The torrent file is great because it allows people to get everything (up to a certain date) in one large collection using a popular format. IPFS isn’t good for either of those, but it IS handy for one-off grabs.</p>

<p>Q: Is speed going to be an issue?</p>

<p>A: Possibly.  As it’s peer-to-peer, people who use an IPFS client will share some or all of the files they’ve grabbed (IPFS has garbage collection that decides when to stop sharing a file; a person can pin it to their local instance to make garbage collection ignore it).  People who use a gateway server don’t share back.  If there’s a large group of “constant seeders” like on the torrent file, it should go quickly.  I know quite a few people who use syncthing also started seeding the torrent file with their existing data, getting more seeders more quickly.  Maybe we could do something like that here if people are willing.  There’s also the possibility that if we use a gateway with a huge upload capacity (like Cloudflare’s), it’ll pin the file locally on Cloudflare’s instance, so it’d be quick for future downloaders.  That last part is just a theory though.</p>

<p>Q: Why?</p>

<p>A: It seemed like a good project for me to play with, and it could help the community.  Mega is great, but it does have limitations and costs.  I thought it could be good to have a backup, just in case.  Also, bipedal has started thinking more about the TSL Raws wing, which is larger than the standard TSL (~3.3TB and growing vs 1.5TB), and it may end up that those files can’t live on Mega with the current plan(s), so we need another way for timers/subbers/etc to get the files without jumping on the entire syncthing (or whatever).</p>

<p>If you want to know more, there’s a ton of doco out there on IPFS.  If you have questions, find me at @ejstacey on discord, or @ejstacey.joyrex.net on bluesky, or <a href="/@/ejstacey@kolektiva.social" class="u-url mention">@<span>ejstacey@kolektiva.social</span></a> on mastodon/activitypub.</p>
]]></content:encoded>
      <guid>https://blog.joyrex.net/the-silent-library-on-ipfs</guid>
      <pubDate>Sun, 02 Jul 2023 04:28:03 +0000</pubDate>
    </item>
    <item>
      <title>Vote NO to Deakin’s Dodgy Deal</title>
      <link>https://blog.joyrex.net/vote-no-to-deakins-dodgy-deal?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[NB: I wrote this during the 4-hour stop-work NTEU is partaking in today.&#xA;&#xA;Deakin has put its version of a “fair” EA to staff, which is up for a vote from 24th -  28th of April. It’s typical Deakin management. Embarrassingly low offers of pay rises, painfully low rate of casualisation conversion, and... that’s about it. Nothing about working from home rights for staff. A half-hearted “oh yeah we’ll look at workloads” comment, and some First Nations advancement stuff (which I have no comment on because I haven’t spoken to anyone across that yet).&#xA;&#xA;Deakin’s branch of the NTEU has been posting facts. They’re working to combat the half-truths, twisted realities, and missing information Deakin tries to disseminate around their terrible deal. Here’s some of the Deakin NTEU information pointing out why you should vote NO:&#xA;&#xA;Why Vote No? You deserve better! - (nteu-deakin.org)&#xA;&#xA;How does Deakin’s pay offer compare to the sector? - (nteu-deakin.org)&#xA;&#xA;There’s more at:&#xA;&#xA;Home - (nteu-deakin.org)&#xA;&#xA;Deakin NTEU Branch (@DeakinNteu) / Twitter&#xA;&#xA;NTEU Deakin University Branch | Facebook&#xA;&#xA;!--more--&#xA;&#xA;Those are all great resources to stay up to date on the current information. In this post I’d like to focus more on Deakin’s (semi-recent) history and why their rationalisations for not helping its staff are not true. I’d also like to review how they’ve treated staff in the last few years, during the multiple (and still ongoing) restructures Deakin is doing to its staff while management constantly shifts around the catastrophic fires they started. This post will have some numbers in it, but it’s more about learning how they really feel about the staff, and how they’ve blatantly misled us many times.&#xA;&#xA;How we got here…&#xA;&#xA;To start, here’s a rundown of the restructures during the COVID period. 
I am not saying COVID caused these restructures, as by all accounts they were planned well before that. They were, however, a convenient excuse to let Iain and Deakin Council run roughshod through the staff, hacking and slashing however they wanted.&#xA;&#xA;March 2020&#xA;&#xA;COVID lockdowns start. Deakin (and everyone else) must rush to have the workforce working from home. Thanks to the hard work of the staff, the migration to working from home wasn’t as bad as some other universities and companies.&#xA;&#xA;May 2020&#xA;&#xA;Iain has a town hall announcing the first round of redundancies. He cries poor and gives the worst-case predictions of loss of students. He conveniently ignores the university has $500+ million in a “future fund”, as well as over a billion in easily sellable non-building assets. That doesn’t matter; 400+ full-time staff must lose their jobs, as well hundreds and hundreds of casuals and sessional staff. Voluntary redundancies are not offered. They have a hit list and they go through it, cutting out people via restructure/major workplace change (MWC).&#xA;&#xA;June 2020&#xA;&#xA;Deakin attempts to give the minimum amount of consultation (2 weeks) on the changes, and takes other actions that go against the EA. The NTEU takes them to the Fair Work Commission and wins. Deakin begrudgingly extends consultation and makes some other changes to fall in line with the FWC decision.&#xA;&#xA;March 2021&#xA;&#xA;Deakin posts their financial report including (surprise surprise), making a profit on the backs of all the staff they fired (and the rest they’ve overworked).&#xA;&#xA;August 2021&#xA;&#xA;Deakin comes out with Phase 2 (“Deakin Reimagined” aka “Deakin Disimagined”). Even more redundancies. Major workplace changes across the board. Already low staff morale goes even lower. Deakin expects even more of the staff as they cause chaos and stress amongst everyone. 
I believe in the town hall presentation announcing this phase, Iain lies and says that this will happen and then no more restructures after, but I am not sure and will not listen to him talk for an hour to get one snippet. Deakin pretends to take ideas in their Ideas Hub but by all accounts, ignores all input and does whatever they wanted with areas. The result is multiple thousands of peoples’ lives thrown into stress and chaos, and near 1000 (I think?) more job cuts, easily double the first phase. Deakin did this to you, with no remorse and no support (no, the employee wellbeing service with their booked-out-for-months psychologists doesn’t count as support).&#xA;&#xA;March 2022&#xA;&#xA;Deakin once again posts a profit in their financial report. Iain is congratulated for his actions by people who are more worried about money than people.&#xA;&#xA;Now&#xA;&#xA;Deakin is struggling. Staff are overworked, stressed, and morale is terrible. Some areas are on their third or fourth restructure in as many years, as management seems to just be making it up as they go along. Middle management bullying is rife and university leadership appears to have abdicated themselves of any responsibility. Students are starting to see the cracks as staff are working WELL beyond what they’re supposed to, because they care for the students (and Deakin knows this).&#xA;&#xA;Deakin chooses this moment to try and give staff a shoddy EA, dooming them to another three years of being underpaid and overworked.&#xA;&#xA;May 2022&#xA;&#xA;Deakin will post their 2022 annual report. Any bets that they’ll be posting profits again?&#xA;&#xA;Money, money.&#xA;&#xA;That timeline focuses a lot on what Deakin claims vs how Deakin is doing. 
Let’s dig in, looking at their claims before both times they announced they were firing hundreds of people and causing chaos for thousands of others.&#xA;&#xA;May 2020 - Models for future expenses&#xA;&#xA;From Transcript of VC’s 25 May staff briefing (sharepoint.com):&#xA;&#xA;“To put this in context, we will be spending more than we earn in 2020, 2021 and even with an optimistic recovery pathway some of 2022, and we will not return to an operating surplus until 2023.”&#xA;&#xA;November 2020 – More Models&#xA;&#xA;From Video of the All Staff Briefing – 20 November 2020 (sharepoint.com) (around the 15 minute mark):&#xA;&#xA;“For 2020 \[…\] leaving us with a net deficit of 69 million and an underlying surpl—underlying deficit I should say—of 36.4 million.”&#xA;&#xA;“The budget we took to council for next year indicates that \[…\] a gap in there of around 90 million dollars \[deficit\]).”&#xA;&#xA;“But what I can assure you is we will continue to do what we have done in the past, which is to be open and honest as we can with the university community…”&#xA;&#xA;March 2021 – Actual Results and 2021 FY predictions!&#xA;&#xA;From Video of the all staff briefing – 3 March (sharepoint.com) (around the 8 minute mark):&#xA;&#xA;“Now I know that when I spoke to you in the third semester of last year that we would end the year in a deficit…”&#xA;&#xA;Wow, a profit (they choose to use ‘surplus’ even though they’ve turned the university into a business)! Who could have guessed! Certainly not all the models Deakin uses to justify not paying staff (and even firing heaps)!&#xA;&#xA;But wait, he makes more predictions for the future! Let’s see how those turn out…&#xA;&#xA;June 2022 – Actual Results of FY2021&#xA;&#xA;From Video of the VC&#39;s all-staff briefing – 23 June 2022 (sharepoint.com) (around 7 minutes in):&#xA;&#xA;“So.. 
we had a slightly increased income compared to where we were in our budget position…”&#xA;&#xA;“The overall result for the year was a total surplus of 80 million dollars…”&#xA;&#xA;“\[The underlying surplus\] was 21 million dollars. That is a good and solid position to be in for the university.”&#xA;&#xA;Once again, they made a profit.&#xA;&#xA;He also, once again, predicts the future:&#xA;&#xA;That’s 2022, which are the results that haven’t been publicly released. Of course, they used very similar tables in the argument for why they can’t pay people more in this round of bargaining and why they’re offering sub-standard rates.&#xA;&#xA;Summing up&#xA;&#xA;This cycle of making up terrible numbers, using that to decimate the staff, and then coming out in profit is their playbook. It’s how they’ve done everything the last few years, and it’s how they’ll continue to do things, by all accounts. We need to break this cycle, because they definitely won’t.&#xA;&#xA; &#xA;&#xA;“We’re all in this together.”&#xA;&#xA;It was hilarious when Iain said that during the restructures. This is guy on around a million dollars a year (when you add in all his bonuses/entitlements), and a senior exec that are all on $200k+/yr (some above $500k/yr), and all refused to take salary cuts when Deakin’s financials were (allegedly) so dire (Iain forewent some bonuses in 2020, but that was it). This is a council that decided their $500+ million (I believe it’s closer to a one billion dollars now, if not over) surplus nest egg shouldn’t be used AT ALL, but instead sit there, growing, while Deakin staff suffer. This is the exec that decided casual staff should lose access to the library and email when there isn’t a teaching period they’re hired on for. In fact, Deakin is currently in Fair Work because they were caught ripping off casuals for years and years (like many other universities). 
As far as I know, the woman who sends out all the emails about how Deakin is trying to help the staff during these negotiations, hasn’t even shown up to one bargaining meeting. Does any of this say Deakin cares about and respects its staff?&#xA;&#xA;The union saw many, many ideas/feedback letters that were sent to Deakin around their two rounds of restructures. Big surprise: Deakin acted on extremely few of them. This was advice coming from people on the ground, in technical or management roles, who had years of experience in their areas, and Deakin ignored them. Generally, what would come in is more upper management to manage these people, because if there’s one thing that’s needed, it’s more upper management. Because of Deakin ignoring staff input, we lost some great staff, both through redundancies and through wrecked Deakin culture post-restructures (which are still happening).&#xA;&#xA;Everything old is new again&#xA;&#xA;One thing that really gets me is that after these years of lying, scrambling, obfuscating, is that they have the nerve to try it again when it comes to this vote. I guess we can’t blame them, they’ve all getting richer on the backs of a broken staff (those that are still around). Hell, Iain got re-hired for another few years while various areas in the university are so toxic that people are taking stress leave to avoid it. Other areas are so dysfunctional that people are being told it could be a 6+ month wait for help with stuff. This is what Deakin Council and Exec did. I guess as long as they keep denying there’s systemic issues, they can keep milking this cow. It’s worked before, why wouldn’t they try it again?&#xA;&#xA;It’s up to you to stop them. Not as “retribution” for the horrible way they treated their staff during restructures (and continue to treat them, really). You should stop them because there is no good reason they can’t come to the bargaining table and offer more, besides greed. 
I believe I’ve established all their models (or at least the ones they feel like sharing with us) are if not intentionally misleading, very poor. They have no good excuses for why they’re offering so little, besides the unspoken one (they think the staff don’t deserve it). That’s ridiculous.&#xA;&#xA;The scare-mongering coming from the university emails are offensive as well. They send half-completed threats, saying there will be NO pay raises or extra holidays if the staff vote down this EA. They forget to mention “until a new EA is signed, and at other universities new EAs have been signed a month or two after the non-union ballot is struck down”. Our president, Dr. Piper Rodd, has correctly said multiple times that the uni could give us pay rises, give us days off, whenever they wanted. They don’t though, because they don’t actually care about the staff’s wellbeing. They’ve sent various emails threatening staff who are even \thinking\ about taking industrial action, which is not allowed, of course. You can’t be persecuted for thought crimes, although I’m sure they’d love that. It&#39;s the same thing every email/presentation: claims of transparency, a reality of half-truths and obfuscations designed to confuse or exhaust you so you just give in.&#xA;&#xA; &#xA;&#xA;I’ll finish up by saying: regardless of your opinion of the NTEU, Deakin is pushing a terrible deal and expecting you to thank them for it. These are people out of touch with the Deakin community and culture trying to dictate (once again) from the top down how things should be. While this move technically got through before during the restructures, it’s practically failed. This vote should likewise fail. Vote NO. Make Deakin come back to the bargaining table in good faith.]]&gt;</description>
<content:encoded><![CDATA[<h6 id="nb-i-wrote-this-during-the-4-hour-stop-work-nteu-is-partaking-in-today">NB: I wrote this during the 4-hour stop-work NTEU is partaking in today.</h6>

<p>Deakin has put its version of a “fair” EA to staff, which is up for a vote from the 24th to the 28th of April. It’s typical Deakin management. Embarrassingly low offers of pay rises, painfully low rate of casualisation conversion, and... that’s about it. Nothing about working from home rights for staff. A half-hearted “oh yeah we’ll look at workloads” comment, and some First Nations advancement stuff (which I have no comment on because I haven’t spoken to anyone across that yet).</p>

<p>Deakin’s branch of the NTEU has been posting facts. They’re working to combat the half-truths, twisted realities, and missing information Deakin tries to disseminate around their terrible deal. Here’s some of the Deakin NTEU information pointing out why you should vote NO:</p>

<p><a href="https://www.nteu-deakin.org/why-vote-no/">Why Vote No? You deserve better! – (nteu-deakin.org)</a></p>

<p><a href="https://www.nteu-deakin.org/how-much-worse-off-will-you-be-on-deakins-agreement/">How does Deakin’s pay offer compare to the sector? – (nteu-deakin.org)</a></p>

<p>There’s more at:</p>

<p><a href="https://www.nteu-deakin.org/">Home – (nteu-deakin.org)</a></p>

<p><a href="https://twitter.com/deakinnteu">Deakin NTEU Branch (@DeakinNteu) / Twitter</a></p>

<p><a href="https://www.facebook.com/nteudeakinbranch">NTEU Deakin University Branch | Facebook</a></p>



<p>Those are all great resources to stay up to date on the current information. In this post I’d like to focus more on Deakin’s (semi-recent) history and why their rationalisations for not helping its staff are not true. I’d also like to review how they’ve treated staff in the last few years, during the multiple (and still ongoing) restructures Deakin is doing to its staff while management constantly shifts around the catastrophic fires they started. This post will have some numbers in it, but it’s more about learning how they really feel about the staff, and how they’ve blatantly misled us many times.</p>

<h2 id="how-we-got-here">How we got here…</h2>

<p>To start, here’s a rundown of the restructures during the COVID period. I am not saying COVID caused these restructures, as by all accounts they were planned well before that. They were, however, a convenient excuse to let Iain and Deakin Council run roughshod through the staff, hacking and slashing however they wanted.</p>

<h4 id="march-2020">March 2020</h4>

<p>COVID lockdowns start. Deakin (and everyone else) must rush to have the workforce working from home. Thanks to the hard work of the staff, the migration to working from home wasn’t as bad as some other universities and companies.</p>

<h4 id="may-2020">May 2020</h4>

<p>Iain has a town hall announcing the first round of redundancies. He cries poor and gives the worst-case predictions of loss of students. He conveniently ignores the university has $500+ million in a “future fund”, as well as over a billion in easily sellable non-building assets. That doesn’t matter; 400+ full-time staff must lose their jobs, as well as hundreds and hundreds of casuals and sessional staff. Voluntary redundancies are not offered. They have a hit list and they go through it, cutting out people via restructure/major workplace change (MWC).</p>

<h4 id="june-2020">June 2020</h4>

<p>Deakin attempts to give the minimum amount of consultation (2 weeks) on the changes, and takes other actions that go against the EA. The NTEU takes them to the Fair Work Commission and wins. Deakin begrudgingly extends consultation and makes some other changes to fall in line with the FWC decision.</p>

<h4 id="march-2021">March 2021</h4>

<p>Deakin posts their financial report including (surprise surprise), making a profit on the backs of all the staff they fired (and the rest they’ve overworked).</p>

<h4 id="august-2021">August 2021</h4>

<p>Deakin comes out with Phase 2 (“Deakin Reimagined” aka “Deakin Disimagined”). Even more redundancies. Major workplace changes across the board. Already low staff morale goes even lower. Deakin expects even more of the staff as they cause chaos and stress amongst everyone. I believe in the town hall presentation announcing this phase, Iain lies and says that this will happen and then no more restructures after, but I am not sure and will not listen to him talk for an hour to get one snippet. Deakin pretends to take ideas in their Ideas Hub but, by all accounts, ignores all input and does whatever it wants with each area. The result is multiple thousands of people’s lives thrown into stress and chaos, and near 1000 (I think?) more job cuts, easily double the first phase. Deakin did this to you, with no remorse and no support (no, the employee wellbeing service with their booked-out-for-months psychologists doesn’t count as support).</p>

<h4 id="march-2022">March 2022</h4>

<p>Deakin once again posts a profit in their financial report. Iain is congratulated for his actions by people who are more worried about money than people.</p>

<h4 id="now">Now</h4>

<p>Deakin is struggling. Staff are overworked, stressed, and morale is terrible. Some areas are on their third or fourth restructure in as many years, as management seems to just be making it up as they go along. Middle management bullying is rife and university leadership appears to have abdicated all responsibility. Students are starting to see the cracks as staff are working WELL beyond what they’re supposed to, because they care for the students (and Deakin knows this).</p>

<p>Deakin chooses this moment to try and give staff a shoddy EA, dooming them to another three years of being underpaid and overworked.</p>

<h4 id="may-2022">May 2022</h4>

<p>Deakin will post their 2022 annual report. Any bets that they’ll be posting profits again?</p>

<h2 id="money-money">Money, money.</h2>

<p>That timeline focuses a lot on what Deakin claims vs how Deakin is doing. Let’s dig in, looking at their claims before both times they announced they were firing hundreds of people and causing chaos for thousands of others.</p>

<h4 id="may-2020-models-for-future-expenses">May 2020 – Models for future expenses</h4>

<p>From <a href="https://deakin365.sharepoint.com/sites/Network/SitePages/Transcript-of-VC%E2%80%99s-25-May-staff-briefing.aspx">Transcript of VC’s 25 May staff briefing (sharepoint.com)</a>:</p>

<p>“To put this in context, we will be spending more than we earn in 2020, 2021 and even with an optimistic recovery pathway some of 2022, and we will not return to an operating surplus until 2023.”</p>

<h4 id="november-2020-more-models">November 2020 – More Models</h4>

<p>From <a href="https://deakin365.sharepoint.com/sites/Network/SitePages/Video-of-All-Staff-Briefing-%E2%80%93-20-November-2020.aspx">Video of the All Staff Briefing – 20 November 2020 (sharepoint.com)</a> (around the 15 minute mark):</p>

<p><img src="https://i.snap.as/KcFAEWV0.png" alt=""/></p>

<p>“For 2020 […] leaving us with a net deficit of 69 million and an underlying surpl—underlying deficit I should say—of 36.4 million.”</p>

<p>“The budget we took to council for next year indicates that […] a gap in there of around 90 million dollars [deficit].”</p>

<p>“But what I can assure you is we will continue to do what we have done in the past, which is to be open and honest as we can with the university community…”</p>

<h4 id="march-2021-actual-results-and-2021-fy-predictions">March 2021 – Actual Results and 2021 FY predictions!</h4>

<p>From <a href="https://deakin365.sharepoint.com/sites/Network/SitePages/Video-of-the-all-staff-briefing-%E2%80%93-3-March.aspx">Video of the all staff briefing – 3 March (sharepoint.com)</a> (around the 8 minute mark):</p>

<p><img src="https://i.snap.as/1xs1uyGs.png" alt=""/></p>

<p>“Now I know that when I spoke to you in the third semester of last year that we would end the year in a deficit…”</p>

<p>Wow, a profit (they choose to use ‘surplus’ even though they’ve turned the university into a business)! Who could have guessed! Certainly not all the models Deakin uses to justify not paying staff (and even firing heaps)!</p>

<p>But wait, he makes more predictions for the future! Let’s see how those turn out…</p>

<p><img src="https://i.snap.as/hgP1SZIP.png" alt=""/></p>

<h4 id="june-2022-actual-results-of-fy2021">June 2022 – Actual Results of FY2021</h4>

<p>From <a href="https://deakin365.sharepoint.com/sites/Network/SitePages/Video-of-the-VC&#39;s-all-staff-briefing-%E2%80%93-23-June-2022.aspx">Video of the VC&#39;s all-staff briefing – 23 June 2022 (sharepoint.com)</a> (around 7 minutes in):</p>

<p><img src="https://i.snap.as/Iyw2ZhHo.png" alt=""/></p>

<p>“So.. we had a slightly increased income compared to where we were in our budget position…”</p>

<p>“The overall result for the year was a total surplus of 80 million dollars…”</p>

<p>“[The underlying surplus] was 21 million dollars. That is a good and solid position to be in for the university.”</p>

<p>Once again, they made a profit.</p>

<p>He also, once again, predicts the future:</p>

<p><img src="https://i.snap.as/OdmfVe1Z.png" alt=""/></p>

<p>That’s 2022, the results of which haven’t been publicly released. Of course, they used very similar tables in their argument for why they can’t pay people more in this round of bargaining and why they’re offering sub-standard rates.</p>

<h4 id="summing-up">Summing up</h4>

<p>This cycle of making up terrible numbers, using that to decimate the staff, and then coming out in profit is their playbook. It’s how they’ve done everything the last few years, and it’s how they’ll continue to do things, by all accounts. We need to break this cycle, because they definitely won’t.</p>

<p> </p>

<h2 id="we-re-all-in-this-together">“We’re all in this together.”</h2>

<p>It was hilarious when Iain said that during the restructures. This is a guy on around a million dollars a year (once you add in all his bonuses/entitlements), leading a senior exec who are all on $200k+/yr (some above $500k/yr), and who <strong>all refused</strong> to take salary cuts when Deakin’s financials were (allegedly) so dire (Iain forewent some bonuses in 2020, but that was it). This is a council that decided their $500+ million (I believe it’s closer to one billion dollars now, if not over) surplus nest egg shouldn’t be used <strong>AT ALL</strong>, but should instead sit there, growing, while Deakin staff suffer. This is the exec that decided casual staff should lose access to the library and email whenever there isn’t a teaching period they’re hired on for. In fact, Deakin is currently before Fair Work because they were caught ripping off casuals for years and years (like many other universities). As far as I know, the woman who sends out all the emails about how Deakin is trying to help the staff during these negotiations hasn’t even shown up to one bargaining meeting. <strong>Does any of this say Deakin cares about and respects its staff?</strong></p>

<p>The union saw many, many ideas/feedback letters that were sent to Deakin around their two rounds of restructures. Big surprise: Deakin acted on extremely few of them. This was advice coming from people on the ground, in technical or management roles, with years of experience in their areas, and Deakin ignored them. Generally, what came out instead was more upper management to manage these people, because if there’s one thing that’s needed, it’s more upper management. Because Deakin ignored staff input, we lost some great staff, both through redundancies and through the wrecked Deakin culture post-restructures (which are still happening).</p>

<h2 id="everything-old-is-new-again">Everything old is new again</h2>

<p>One thing that really gets me is that after these years of lying, scrambling, and obfuscating, they have the nerve to try it again when it comes to this vote. I guess we can’t blame them; they’re all getting richer on the backs of a broken staff (those that are still around). Hell, Iain got re-hired for another few years while various areas of the university are so toxic that people are taking stress leave to avoid them. Other areas are so dysfunctional that people are being told it could be a 6+ month wait for help. This is what Deakin Council and Exec did. I guess as long as they keep denying there are systemic issues, they can keep milking this cow. It’s worked before, so why wouldn’t they try it again?</p>

<p><strong>It’s up to you to stop them.</strong> Not as “retribution” for the horrible way they treated their staff during the restructures (and continue to treat them, really). You should stop them because there is <strong>no good reason</strong> they can’t come to the bargaining table and offer more, besides greed. I believe I’ve established that all their models (or at least the ones they feel like sharing with us) are, if not intentionally misleading, very poor. They have no good excuses for why they’re offering so little, besides the unspoken one (they think the staff don’t deserve it). That’s ridiculous.</p>

<p>The scare-mongering coming from the university emails is offensive as well. They send half-complete threats, saying there will be NO pay rises or extra holidays if the staff vote down this EA. They forget to mention “until a new EA is signed, and at other universities new EAs have been signed a month or two after the non-union ballot is struck down”. Our president, Dr. Piper Rodd, has correctly said multiple times that the uni could give us pay rises and give us days off whenever they wanted. They don’t though, because they don’t actually care about the staff’s wellbeing. They’ve sent various emails threatening staff who are even <em>thinking</em> about taking industrial action, which is not allowed, of course. You can’t be prosecuted for thought crimes, although I’m sure they’d love that. It’s the same thing with every email/presentation: claims of transparency, a reality of half-truths and obfuscations designed to confuse or exhaust you so you just give in.</p>

<p> </p>

<p>I’ll finish up by saying: regardless of your opinion of the NTEU, <strong>Deakin is pushing a terrible deal</strong> and expecting you to thank them for it. These are people out of touch with the Deakin community and culture, trying to dictate (once again) from the top down how things should be. While this approach technically got through during the restructures, in practice it has failed. This vote should fail likewise. <strong>Vote NO</strong>. Make Deakin come back to the bargaining table in good faith.</p>
]]></content:encoded>
      <guid>https://blog.joyrex.net/vote-no-to-deakins-dodgy-deal</guid>
      <pubDate>Fri, 21 Apr 2023 02:30:45 +0000</pubDate>
    </item>
    <item>
      <title>Voting YES For PABO at Deakin</title>
      <link>https://blog.joyrex.net/voting-yes-for-pabo-at-deakin?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Deakin NTEU Members will soon receive a ballot that allows them to vote on if Deakin NTEU Members should take protected action, and what protected action they should be allowed to take. This is called a PABO, or a Protected Action Ballot Order. I thought I’d write a post on why we need to strongly support this, and what it means.&#xA;&#xA;!--more--&#xA;&#xA;The Process&#xA;&#xA;This is based on what I’ve observed and learned. It may not be 100% correct. I am not a lawyer or Industrial Officer.&#xA;&#xA;Thanks to corporate influence on our political system, and the wave of neoliberalism we’ve been living in for the last 40-50 years, industrial laws have declined so far in this country that unions are not allowed to strike unless it’s a specific time (EA bargaining) and after jumping through various annoying hoops (described below). In addition, unions can’t have solidarity strikes anymore, further weakening worker power. Finally, Fair Work has made it so that unions can only take actions that they have specifically told them about, which Fair Work will look at and possibly reject. Breaking any of those rules leads to the unions getting tens of thousands of dollars in fines, and possibly some for the individuals as well. Because of this, we have PABOs. It’s worth repeating that this was deliberately done by the major parties in concert with business. The politicians and the businesses are all doing great while the workers (and non-workers)… not so much.&#xA;&#xA;To start the process, unions and the university must make concerted efforts to genuinely negotiate. The NTEU has done this, but Deakin has stonewalled and given scraps, ignoring the bigger issues. 
You can see this and this blog post for more details on that.&#xA;&#xA;To be allowed to strike or do any sort of industrial action, we must first have a member meeting where over 50% of the attending members agree to have the union lodge with Fair Work saying it wants to run a protected action ballot. The NTEU then lodges with Fair Work to hold the Protected Action Ballot. Fair Work generally agrees with this if the members voted for it. This is the point we are at now, as of 20/03/2023. Assuming Fair Work approves the ballot request, they hand over control of the election to the AEC, who runs the Protected Action Ballot process.&#xA;&#xA;When the ballot starts, all members will be emailed with a link to an online voting form. They then have a set amount of time (two weeks, I believe) to vote. This allows the members to vote on each individual industrial action listed in the application. It’s important to note that this vote is for allowing the union to possibly take the action. It is not signing you up personally to take that specific action. The union is aware not everyone is able to (or feels comfortable) taking every action, but the more actions we get voted up, the more weapons we’ll have in our arsenal when it comes to acting.&#xA;&#xA;For a PABO action to be successful, 50% of the union members need to vote “YES” for action on the voting form. Also 50% of the members need to vote, overall.&#xA;&#xA;I believe the university could then lodge with Fair Work saying that industrial action will cause undue stress on the company, etc, etc. Normal corporate bullshit.&#xA;&#xA;If the PABO actions are successful, they’re available for the union to use during this bargaining round.&#xA;&#xA;The Actions&#xA;&#xA;The actions start from the very small (putting notes into email signatures) to the very large (fully striking). There are many options in-between there. 
Every approved PABO action means that you can freely take it without the university being able to take adverse action against you for it. If you strike, they are legally allowed to not pay you for the time you were striking, but they can’t penalise you for striking. If you stop taking library late fees as part of an industrial action, you can’t be punished for that. This is our opportunity.&#xA;&#xA;No members are required to take any of the approved actions themselves, but it’s important to note that the more people we have taking each action, the stronger we are. That strength will be apparent to the university and our fellow staff. Also, the stronger we are from the start, the faster we can hopefully resolve the bargaining, and with better results for everyone.&#xA;&#xA;Some of the actions may impact students. You will see them on the ballot. I understand people that have reservations about impacting the (already underserved due to Deakin management) students, and I sympathise with it. My view is that we’re trying to stop an even bigger problem within Deakin, and the temporary measures will lead to greater long-term experience for both the staff and students. Even if you are personally against actions on the list, I still encourage you to vote YES. Others in the university may be able to be in a better position to do it, and discouraging some methods tells management we have a line we won’t cross. If management from the last 3-4 years is any indication, they will use that to their advantage and exploit us. They have previously, they’re doing it now, and they’ll continue to do it if we let them.&#xA;&#xA;This is the time we can make our views known and not suffer a penalty. Deakin management has spent years abusing us (more below), and it’s our time to make sure they cannot take advantage of us in the future. With Iain being hired for another five years, it seems they have more disruption on the table. This is our chance to get provisions put into the new EA. 
One of the big ones related to this is “staff can only go through one restructure (major workplace change) during each EA period”. Deakin management hates this idea for some reason. We need to fight for our protection and survival.&#xA;&#xA;The Reasons&#xA;&#xA;I’m going to go through how Deakin has treated us since COVID started, but even before that time, there have been issues. Particularly, people were exploited and overworked/underpaid, WAMs were rubbish, and restructures were happening before COVID. None of that has improved.&#xA;&#xA;March 2020: COVID Lockdowns start. Deakin (and everyone else) must rush to have the workforce working from home. Thanks to the hard work of the staff (not management), the migration to WFH wasn’t as bad as some other universities and companies.&#xA;May 2020: Iain has a town hall announcing the first round of redundancies. He cries poor and gives the worst-case predictions of loss of students. He conveniently ignores the university has $500+ million in a “future fund”, as well as over a billion in easily-sellable non-building assets. That doesn’t matter; 400+ full-time/fixed term staff must lose their jobs, as well hundreds and hundreds of casuals and sessional staff. Voluntary redundancies are not offered. They have a hit list, and they go through it, cutting out people via restructure/major workplace change (MWC).&#xA;June 2020: Deakin attempts to give the minimum amount of consultation (2 weeks) on the changes, and takes other actions that go against the EA, according to the union. The NTEU takes them to the Fair Work Commission and wins. Deakin begrudgingly extends consultation and makes some other changes to fall in line with the FWC decision.&#xA;March 2021: Deakin posts their financial report including (surprise surprise), making a profit on the backs of all the staff they fired and the rest they’ve overworked.&#xA;August 2021: Deakin comes out with phase 2 (“Deakin Reimagined” aka “Deakin Disimagined”). 
Even greater redundancies. Major workplace changes across the board. Already low staff morale goes even lower. Deakin expects even more of the staff as they cause chaos and stress amongst everyone. I believe in the town hall presentation announcing this phase, Iain lies and says that this will happen and then no more restructures after, but I am not sure and will not listen to him talk for an hour to get one snippet. Deakin pretends to take ideas in their Ideas Hub but by all accounts ignores all input and does whatever they wanted with areas. The result is multiple thousands of peoples’ lives thrown into stress and chaos, and near 1000 (I think?) more job cuts, easily double the first phase. Deakin did this to you, with no remorse and no support (no, the employee wellbeing service with their booked-out-for-months psychologists doesn’t count as support).&#xA;March 2022: Deakin once again posts a profit in their financial report. Iain is congratulated for his actions by people who are more worried about money than people.&#xA;Now: Deakin is struggling. Staff are overworked, stressed, and morale is terrible. Some areas are on their third or fourth restructure in as many years, as management seems to just be making it up as they go along. Middle management bullying is rife and university leadership appears to have abdicated themselves of any responsibility. Students are starting to see the cracks as staff are working WELL beyond what they’re supposed to, because they care for the students (and Deakin knows this).&#xA;Later: Deakin will post their 2022 annual report. Any bets that they’ll be posting profits again?&#xA;&#xA;The entire time Deakin has cried poor, and been so dishonest to make up wild scenarios where they’ll lose money (that, surprise surprise, don’t happen), as well as showing profits as “red” on a balance sheet in town halls (red is traditionally the colour for losses). They know they’re making money while they destroy the staff. They just don’t care.. 
they’re making money.&#xA;&#xA;Talk to any staff member and you’ll find I’m not lying about the above. Deakin is hurting and management’s solution seems to be trying to squeeze more blood out of the staff. During this whole time (seriously, from April 2020) they’ve been planning on people going back into the office, and they’re doing it in unsafe and inconsistent ways. Suddenly the staff that have worked from home for the last few years, successfully, can’t work from home anymore. There is more information here. All of this is unacceptable, and this is our chance to fix it.&#xA;&#xA;It may be a common phrase, but Deakin only runs on the good will of the staff. That’s how Iain gets his \~ $1 million/year salary.  It’s how Deakin exec and councillors sit on $200-300k+/year (before bonuses). It’s how Deakin gets its good reputation with students, its opportunities in other countries, its opportunities in research. The staff. For the last few years, the staff has been bending over backwards helping Deakin keep running. Long/extra hours, working while trying to take care of kids, working without having a proper workspace at home. Deakin staff supported Deakin and helped it grow, despite management continually abusing us and cutting down our workmates. They are exploiting the staff and expect the staff’s care for the students will let them get away with it. Before COVID, depending on where you were, this was the norm, just not as severely. Deakin has always depended on staff working extra hours. They have always depended on staff going above and beyond for their students, but the last few years, they’ve turned it up to 11. With very little thanks or compensation, and no sign of reducing workloads. This is why we’re fighting.&#xA;&#xA;I linked some blog posts above, but you can look at the union reports from bargaining meetings on the Deakin NTEU website. 
Videos are made after each meeting giving an update, and updates five and six are relevant to how Deakin is ignoring our requests. This happens to align with the strategy put forth by the extremely terrible Australian Higher Education Industrial Association (AHEIA). They encourage delaying, ignoring, and going for non-union ballots to staff to see if they’ll accept a meagre EA. In a couple smaller universities these ballots have gotten up, but by and large they’re being rejected. This is being proven the right move by the benefits the NTEU-negotiated EAs are giving employees for the next few years.&#xA;&#xA;Deakin is a great university. It’s still a great university for the students, but it’s currently hurting, and management seems to have largely ignored the problem, calling any issue raised an “isolated incident” or “a single rogue manager”. This is clearly not the case, but they’re apparently not paid to take responsibility for their actions. Only push forward and stick their head in the sand when told it’s not working. It’s no wonder management scores so poorly on Pulse Surveys.&#xA;&#xA;We can make changes (that Deakin can EASILY afford) that give the staff (full time, fixed term, casual/sessional) a much better quality of life. It’s deserved, and it’s up to the union and Deakin staff to do it, because God knows no one else will.&#xA;&#xA;Links: &#xA;&#xA;Deakin NTEU Page - Actual updates instead of the BS coming from Deakin.&#xA;Join the NTEU!  Help us fight the bastards.&#xA;VC’s Updates (may need a Deakin account)]]&gt;</description>
<content:encoded><![CDATA[<p>Deakin NTEU members will soon receive a ballot that allows them to vote on whether they should take protected action, and what protected action they should be allowed to take. This is called a PABO, or a Protected Action Ballot Order. I thought I’d write a post on why we need to strongly support this, and what it means.</p>



<p><strong>The Process</strong></p>

<p><em>This is based on what I’ve observed and learned. It may not be 100% correct. I am not a lawyer or Industrial Officer.</em></p>

<p>Thanks to corporate influence on our political system, and the wave of neoliberalism we’ve been living under for the last 40-50 years, industrial laws have declined so far in this country that unions are not allowed to strike except at a specific time (EA bargaining) and after jumping through various annoying hoops (described below). In addition, unions can’t hold solidarity strikes anymore, further weakening worker power. Finally, Fair Work has made it so that unions can only take actions they have specifically told Fair Work about, which Fair Work will review and possibly reject. Breaking any of those rules leads to unions getting tens of thousands of dollars in fines, and possibly fines for individuals as well. Because of this, we have PABOs. It’s worth repeating that this was done deliberately by the major parties in concert with business. The politicians and the businesses are all doing great while the workers (and non-workers)… not so much.</p>

<p>To start the process, unions and the university must make concerted efforts to genuinely negotiate. The NTEU has done this, but Deakin has stonewalled and given scraps, ignoring the bigger issues. You can see <a href="https://www.nteu-deakin.org/2023/bargaining-update-5/" title="this">this</a> and <a href="https://www.nteu-deakin.org/2023/bargaining-update-6/" title="this">this</a> blog post for more details on that.</p>

<p>To be allowed to strike or do any sort of industrial action, we must first have a member meeting where over 50% of the attending members agree to have the union lodge with Fair Work saying it wants to run a protected action ballot. The NTEU then lodges with Fair Work to hold the Protected Action Ballot. Fair Work generally agrees with this if the members voted for it. This is the point we are at now, as of 20/03/2023. Assuming Fair Work approves the ballot request, they hand over control of the election to the AEC, who runs the Protected Action Ballot process.</p>

<p>When the ballot starts, all members will be emailed with a link to an online voting form. They then have a set amount of time (two weeks, I believe) to vote. This allows the members to vote on each individual industrial action listed in the application. <strong>It’s important to note that this vote is for allowing the union to possibly take the action. It is not signing you up personally to take that specific action</strong>. The union is aware not everyone is able to (or feels comfortable) taking every action, but the more actions we get voted up, the more weapons we’ll have in our arsenal when it comes to acting.</p>

<p>For a PABO action to be successful, 50% of the voting members need to vote “YES” for that action on the voting form. In addition, at least 50% of all members need to vote overall.</p>

<p>I believe the university could then lodge with Fair Work saying that industrial action will cause undue stress on the company, etc, etc. Normal corporate bullshit.</p>

<p>If the PABO actions are successful, they’re available for the union to use during this bargaining round.</p>

<p><strong>The Actions</strong></p>

<p>The actions start from the very small (putting notes into email signatures) to the very large (fully striking). There are many options in-between there. Every approved PABO action means that you can freely take it without the university being able to take adverse action against you for it. If you strike, they are legally allowed to not pay you for the time you were striking, but they can’t penalise you for striking. If you stop taking library late fees as part of an industrial action, you can’t be punished for that. <strong>This is our opportunity.</strong></p>

<p>No members are required to take any of the approved actions themselves, but it’s important to note that <strong>the more people we have taking each action, the stronger we are</strong>. That strength will be apparent to the university and our fellow staff. Also, the stronger we are from the start, the faster we can hopefully resolve the bargaining, and with better results for everyone.</p>

<p>Some of the actions may impact students. You will see them on the ballot. I understand that some people have reservations about impacting the (already underserved, thanks to Deakin management) students, and I sympathise with that. My view is that we’re trying to stop an even bigger problem within Deakin, and these temporary measures will lead to a greater long-term experience for both staff and students. Even if you are personally against some actions on the list, I still encourage you to <strong>vote YES</strong>. Others in the university may be in a better position to take them, and discouraging some methods tells management we have a line we won’t cross. If management from the last 3-4 years is any indication, they will use that to their advantage and exploit us. They have previously, they’re doing it now, and they’ll continue to do it if we let them.</p>

<p>This is the time we can make our views known and not suffer a penalty. Deakin management has spent years abusing us (more below), and it’s our time to make sure they cannot take advantage of us in the future. With Iain being hired for another five years, it seems they have more disruption on the table. This is our chance to get provisions put into the new EA. One of the big ones related to this is “staff can only go through one restructure (major workplace change) during each EA period”. Deakin management hates this idea for some reason. <strong>We need to fight</strong> for our protection and survival.</p>

<p><strong>The Reasons</strong></p>

<p>I’m going to go through how Deakin has treated us since COVID started, but even before that time, there have been issues. Particularly, people were exploited and overworked/underpaid, WAMs were rubbish, and restructures were happening before COVID. None of that has improved.</p>
<ul><li>March 2020: COVID Lockdowns start. Deakin (and everyone else) must rush to have the workforce working from home. Thanks to the hard work of the staff (not management), the migration to WFH wasn’t as bad as some other universities and companies.</li>
<li>May 2020: Iain has a town hall announcing the first round of redundancies. He cries poor and gives the worst-case predictions of loss of students. He conveniently ignores that the university has $500+ million in a “future fund”, as well as over a billion in easily-sellable non-building assets. That doesn’t matter; 400+ full-time/fixed-term staff must lose their jobs, as well as hundreds and hundreds of casual and sessional staff. Voluntary redundancies are not offered. They have a hit list, and they go through it, cutting out people via restructure/major workplace change (MWC).</li>
<li>June 2020: Deakin attempts to give the minimum amount of consultation (2 weeks) on the changes, and takes other actions that go against the EA, according to the union. The NTEU takes them to the Fair Work Commission and wins. Deakin begrudgingly extends consultation and makes some other changes to fall in line with the FWC decision.</li>
<li>March 2021: Deakin posts their financial report including (surprise surprise), making a profit on the backs of all the staff they fired and the rest they’ve overworked.</li>
<li>August 2021: Deakin comes out with phase 2 (“Deakin Reimagined” aka “Deakin Disimagined”). Even greater redundancies. Major workplace changes across the board. Already low staff morale goes even lower. Deakin expects even more of the staff as it causes chaos and stress amongst everyone. I believe in the town hall presentation announcing this phase, Iain lies, saying that this restructure will happen and then there will be no more afterwards, but I am not sure and will not listen to him talk for an hour to get one snippet. Deakin pretends to take ideas in their Ideas Hub but by all accounts ignores all input and does whatever it wanted with each area. The result is multiple thousands of people’s lives thrown into stress and chaos, and near 1000 (I think?) more job cuts, easily double the first phase. Deakin did this to you, with no remorse and no support (no, the employee wellbeing service with its booked-out-for-months psychologists doesn’t count as support).</li>
<li>March 2022: Deakin once again posts a profit in their financial report. Iain is congratulated for his actions by people who are more worried about money than people.</li>
<li>Now: Deakin is struggling. Staff are overworked, stressed, and morale is terrible. Some areas are on their third or fourth restructure in as many years, as management seems to just be making it up as they go along. Middle management bullying is rife and university leadership appears to have absolved itself of any responsibility. Students are starting to see the cracks as staff are working WELL beyond what they’re supposed to, because they care for the students (and Deakin knows this).</li>
<li>Later: Deakin will post their 2022 annual report. Any bets that they’ll be posting profits again?</li></ul>

<p>The entire time, Deakin has cried poor, and been so dishonest as to make up wild scenarios where they’ll lose money (that, surprise surprise, don’t happen), as well as showing profits in “red” on balance sheets in town halls (red is traditionally the colour for losses). They know they’re making money while they destroy the staff. They just don’t care… they’re making money.</p>

<p>Talk to any staff member and you’ll find I’m not lying about the above. Deakin is hurting and management’s solution seems to be trying to squeeze more blood out of the staff. During this whole time (seriously, from April 2020) they’ve been planning on people going back into the office, and they’re doing it in unsafe and inconsistent ways. Suddenly the staff that have worked from home for the last few years, successfully, can’t work from home anymore. There is more information <a href="https://www.nteu-deakin.org/2023/working-from-home-under-threat-at-deakin-and-what-you-can-do-about-it/">here</a>. All of this is unacceptable, and this is our chance to fix it.</p>

<p>It may be a common phrase, but <strong>Deakin only runs on the goodwill of the staff</strong>. That’s how Iain gets his ~$1 million/year salary. It’s how Deakin execs and councillors sit on $200-300k+/year (before bonuses). It’s how Deakin gets its good reputation with students, its opportunities in other countries, and its opportunities in research. The staff. For the last few years, staff have been bending over backwards to keep Deakin running: long and extra hours, working while trying to take care of kids, working without a proper workspace at home. Deakin staff supported Deakin and helped it grow, despite management continually abusing us and cutting down our workmates. Management is exploiting the staff and expects that the staff’s care for the students will let them get away with it. Before COVID, depending on where you were, this was the norm, just less severe: Deakin has always depended on staff working extra hours and going above and beyond for their students. But in the last few years they’ve turned it up to 11, with very little thanks or compensation and no sign of reducing workloads. This is why we’re fighting.</p>

<p>I linked some blog posts above, but you can also read the union reports from bargaining meetings on the <a href="https://www.nteu-deakin.org/">Deakin NTEU website</a>. Videos giving an update are posted after each meeting; updates five and six are relevant to how Deakin is ignoring our requests. This happens to align with the strategy <a href="https://www.theguardian.com/australia-news/2023/mar/01/australian-universities-advised-to-avoid-being-roped-into-multi-employer-bargaining-leaked-strategy-reveals">put forth</a> by the extremely terrible Australian Higher Education Industrial Association (AHEIA), which encourages delaying, ignoring, and putting non-union ballots to staff to see if they’ll accept a meagre EA. At a couple of smaller universities these ballots have gotten up, but by and large they’re being rejected, and holding out is proving to be the right move given the benefits the NTEU-negotiated EAs are delivering for employees over the next few years.</p>

<p>Deakin is a great university. It’s still a great university for the students, but it’s currently hurting, and management has largely ignored the problem, dismissing every issue raised as an “isolated incident” or “a single rogue manager”. That is clearly not the case, but they’re apparently not paid to take responsibility for their actions, only to push forward and stick their heads in the sand when told it’s not working. It’s no wonder management scores so poorly on Pulse Surveys.</p>

<p>We can make changes (that Deakin can EASILY afford) that give the staff (full time, fixed term, casual/sessional) a much better quality of life. It’s deserved, and it’s up to the union and Deakin staff to do it, because God knows no one else will.</p>

<p>Links:</p>
<ul><li><a href="https://www.nteu-deakin.org/">Deakin NTEU Page</a> – Actual updates instead of the BS coming from Deakin.</li>
<li><a href="https://www.nteu.au/Join_Form/default">Join the NTEU</a>!  Help us fight the bastards.</li>
<li><a href="https://deakin365.sharepoint.com/sites/Network/_layouts/15/news.aspx?title=From%20the%20VC&amp;newsSource=3&amp;instanceId=3fa7fdf8-f85b-42d4-8575-c6feb9474706&amp;webPartId=8c88f208-6c77-4bdb-86a0-0c47b4316588&amp;serverRelativeUrl=/sites/network&amp;pagesListId=8b3ef5fb-b908-4c09-b0c7-237c2cdbbb93">VC’s Updates</a> (may need a Deakin account)</li></ul>
]]></content:encoded>
      <guid>https://blog.joyrex.net/voting-yes-for-pabo-at-deakin</guid>
      <pubDate>Mon, 20 Mar 2023 22:38:08 +0000</pubDate>
    </item>
  </channel>
</rss>