Feedgrid is a news reader app for displaying RSS feeds in configurable 2-D grids, giving a bird's-eye view of your interests.
In this post we will discuss how attackers use Microsoft Office in phishing attacks to capture NTLM hashes from Windows. Microsoft Office applications like Word, PowerPoint, Excel, and Outlook are among the most trusted tools in any organization, and attackers take advantage of that trust to masquerade as legitimate senders.
Here, we explain a different approach attackers take in phishing campaigns to capture Microsoft Windows NTLM hashes.
In practice, the attacker uses the UNC path injection technique to capture Windows NTLM hashes, with phishing as the delivery mechanism. Abusing Microsoft Outlook 365 to Capture NTLM
In late October 2022, we became aware of CVE-2022-41140, a buffer overflow and remote code execution vulnerability in D-Link routers, which D-Link had been notified of on February 17th. Noting the months-long turnaround time, we decided this was a good chance to perform a learning and discovery exercise.
On March 13th, 2023 we became aware of CVE-2023-24762, a command injection vulnerability in D-Link DIR-867 devices. This recent CVE spurred us to share some of our internal documentation regarding a research spike into D-Link devices.
This blog aims to explain the process of gaining a foothold in firmware or a physical device for vulnerability research and achieving a debuggable interface. While existing Proof-Of-Concept code for (yet another) D-Link vulnerability CVE-2022-1262 is utilized within this document, as well as strong hints at suspect areas of code, don’t expect to find any new ready-to-fire exploits buried in the contents below. Debugging D-Link: Emulating firmware and hacking hardware
I have a beating heart, a healthy body and a mind capable of dreaming. This is all I need. No greater comfort will ever exist.
It’s a good time to learn to garden, I guess. The internet was fun for a couple decades, but soon it’ll be cable TV-grade through and through.
Search engine crawlers no longer function as they once did; they have been neutered to push you toward sites that fit the narrative. Most web pages are now lost or difficult to find if you don't know the link. Internet etiquette has degraded. Corporations have decided they must have control over every aspect of the Internet. The number of sites the average person visits has dropped dramatically due to the growth of big-tech websites; competition is non-existent or irrelevant, causing stagnation. Good luck getting anyone to use your new site without a controversy at a major player in the field to make people switch.
You feel like you are talking to bots? That’s because most people are literally walking bots and their iPhone has granted them access to the net without broadband or a desktop; unleashed to discourse their undesired opinions and thoughts.
Our ‘purpose’ is not to work 40 hours a week so someone else can live a better life.
The American ruling class has been utterly divorced from the consequences of their reckless interventions, and because the strength of their MSM propaganda is so powerful, I believe they have collectively ’lost the plot’, so to speak, and have begun to like the smell of their own bullshit, that is to say, they are completely taken in by the astroturfed rhetoric that is meant to manufacture consent among the public at large, i.e. the proles, and the petite bourgeoisie. Because they actually believe their own ridiculous lies about ‘human rights’ and intervening to ‘protect freedom and democracy’, they keep implementing failed strategies that were only ever meant to be used for expedience, and when relevant. They will keep trying to fuck around with Russia, China, Iran, DPRK, Venezuela, and so on until they are utterly and completely dismantled. Soon the only people left on the world stage who believe the Americans will be the Americans themselves.
This cheatsheet is built from numerous papers, GitHub repos and GitBooks, blogs, HTB boxes and labs, and other resources found on the web or through my experience. This was originally a private page that I made public, so it is possible that I have copy/pasted some parts from other places and forgot to credit or modify them. Active Directory cheatsheet
♫ I’d rather be caught livin’ cozy in muh shack,
A shack inna woods instead of in da hood,
No cotton-pickin’ paper money held up inna bank,
Prepped for the Feds end with silver in the tank!
Under water that is... with the fish... cold. ♫
The Ballad of CozyBroz
Yeah, das rite bros... Landlord Chad here, wat up?
So this is a more comprehensive blog detailing what all I went thru to buy muh house. It took a long time, but everything has worked out in my favor so far.
Going back, this all began late October. I saw the house show up on Zillow, and it met some of my criteria.
I drove out to take a look at it and found that one side of the house is a disaster. Melted vinyl hanging and some windows damaged; not good. The rest of the house looked alright though, and there was no visible fire damage to the structure, other than the vinyl. I scheduled a walk thru and the inside of the house looked to be in good nuff shape! Nothing at all like the pictures of the other houses that I’ve posted on here before. The property also has a beat up metal barn in the back and a well pump house.
The seller was asking for $100K, which was a bit too much in my opinion, so I haggled them down to $90K, figuring that I could try and get it financed. Still, $90K is way too much for the property given its age, location, and depreciated condition. I figured that it would be a stretch to see if it could be financed, and their Realtor seemed to think it would. If it worked, then fine nuff with me, I don’t care! I can pay that off in like 5 years so long as I’m frugal with muh money; and if it didn’t, then I could use that to negotiate a lower offer in cash.
After a month and a half, of course it failed the appraisal and didn’t finance. The lender seemed a bit upset too, as if I would dare even try to get them to finance such a dump; or at least that was the impression I got from em. Oh well, so be it!
I submitted a form of termination and told their Realtor that since I couldn’t get it financed, I could give them a cash offer, but it wouldn’t be nearly as much as what the seller was asking for. The Realtor was disappointed, said the appraisal didn’t seem right, and asked if I could try another lender. I didn’t really want to since I was getting tired of it, but I had nothing to lose and agreed to try another lender.
After another month, of course it failed the second appraisal too. The lender was also kinda sus and kept calling me bud and buddy. “What’s up bud,” “how ya doing buddy?” Maybe he was Canadian or something?
Also during this time, the insurance agent dropped out on the home owners policy because they didn’t like the melted vinyl and said that I would have to find a provider that would sign for a builders risk policy. I kinda figured that would happen, but wished they were more on the ball with it than deciding two and a half months later!
I don’t actually hold any crypto. Instead, I've just been saving cash. Thought picrel was sorta related to topic. Trying to pad things here.
It’s now January and I’m pretty much fed up with talking to lenders and insurance agents and I wasn’t about to go for round three. The place is obviously in no condition to be financed, which doesn’t really matter much to me either way. This has been more or less a stall tactic to whittle down the asking price and patience of the seller for the long game!
So I terminate the contract and submit a cash offer of $57K. The Realtor puts the house back up on the market to see if they can reel in a better deal, though by this point they’ve missed the boat on selling overpriced shid shacks innawoods. About a month goes by and the Realtor asks if I can go any higher. I bump my offer up to $60K and tell him that’s as high as I can go due to what I have in the bank that’ll go toward repairs. The seller accepts a few days later and we sign a new contract!
I quickly call up an insurance agent that I’ve kept waiting on the sidelines for a builders risk policy and get insurance coverage squared away. During closing, there was some delay in processing the wire from the seller’s bank. After 24 hours, I was getting mega pissed off thinking they made off like a bandit, because they were behaving as if they hadn’t received the funds while my bank said that it was certainly sent! And before any of you ask, I did wire the funds one day in advance to avoid any delays; so essentially, not my fault!
Eventually it cleared, but by this point I was sus as shid and began to wonder if everything wasn’t a setup. I did verify the Realtor’s license way back (he was on the up-and-up) as well as the property tax records, but I didn’t ever verify the dang Title Company! But it all cleared up, and a week later I got my deed and verified it with the county recording office.
Sometime soon I can finally get chickens and schizo larp; it'll be fun bros!
Now that the house is mine, I’ll begin the much needed repairs and restoration ASAP! It’s gonna take a little while to get it back into shape because I don’t have that much money left after buying the place. I did plan to do many of the repairs myself for the outside, but after seeing some recent bank collapses followed up by a bank run and the fed assets balance sheet shoot up even higher to cover for retard depositors and soon to be moar inflation, I’m just gonna blow it all now bros. Nowhere is safu, except for maybe boomer rocks.
I think 2023 might just be the year to spend all of your saved up cash now or forever say goodbye to it, unless the “war” really works out in our favor. Doubtful, but it’s possible.
I hope I can figure out a solar powered well pump setup. If that does become a problem in the future, there is a stream of water about a mile off and a few other sources of water nearby. I would just have to boil it if I were to drink it and accept my fate in knowing that I’ll likely get cancer from pesticide runoff as I get older. Sometimes you win some and sometimes you lose some bros.
Thanks for reading my blog!
Spring is around the corner! Spring might be my favorite season of the year, next to winter. I don’t have a whole lot planned except for moar house stuff and moar blogs. I was working on a podcast episode, which is pretty much completed but now I’m hesitant to release. It might be a lil too controversial given the subject material that I rambled on about; so I’m reconsidering the first episode. Might just can it or something, I don’t know…
Back on the house search front: I should probably have mentioned that the seller of the house I submitted a cash offer on about a month back (the one I was unable to finance) did finally accept!
I’m pretty excited since this will be my first house. I just hope a tornado doesn’t blow it away or someone doesn’t lite it on fire while I’m away. I would like to share pictures of it, but then again you could easily look it up on Zillow to then abduct me and do terrible things, so I’m just going to hold off on that for now. At least until I get it restored! In the meantime, I can describe it to you in detail and provide a facsimile of the house below.
This is pretty darn close by comparison. The only difference is that my house has two large dormers and a back half that's roughly the same size as the front porch. It's also covered in vinyl and looks a bit ratty at the moment. This is what it would have looked like in its heyday.
“This quaint Lowcountry farm home in the rolling hills of Texas has been transformed by its previous owners over its many storied years into a pleasant Redneck vernacular. It's adorned by a loosely fitted unwashed cadaver gray vinyl décor, a splendid variety of unmatched vintage aluminum framed windows, and an expansive yard view tastefully decorated in a rustic faux-parolee chique. This one of a kind home in the country side can be yours for just $60,000.”
In other words, it looks like ASS! But I plan to restore it back into the beautiful home it once was. I don’t know what the deal is with nasty gray vinyl, but the place is covered in it as well as some of the linoleum floors. If you ever decide to cover your house in vinyl, then choose something with earthen color or go with pure white. Gray just makes everything look dead, depressing, and sad; no soul.
The inside of the house is so-so and will need some work too, but that can wait until the exterior is restored first. I’m not a fan of linoleum, which is used in many of the rooms. It appears that the original barn-wood floors were removed in most of the back rooms and second floor, so that’ll be a challenge to restore.
The second floor was renovated at some point and hasn’t aged well at all. It’s slathered in wood panel siding, linoleum, and a hard plastic insulated drop ceiling. The style clashes badly with the rest of the house, so I intend to remove all of that and insulate the attic instead.
First order of business will be to have the roof re-shingled, restore electrical, have the well fixed, and some of the plumbing redone and sewer possibly replaced. That alone will cost me everything that I have left, so the rest of the restoration will just move along slowly.
I’ll share pictures of it as I go along, so that’ll be fun! I’ll also write a longer blog post on all the bullshid I had to go thru just to get it. It’s been about 6 fuggin months, no lie bros! Anyhow, that’s all I’ve got for now.
Thanks for reading my blog!
The site you see before your eyes was made in VIM, and quite easily too! I’m not using a static site generator because I think it’s easier to just get it done with VIM. Buuuut, I’m not using VIM alone; there are a few other things at work here.
As you may have already seen, I’ve been using PHP scripts to generate some of the content that you see here. Because these scripts are centralized in a directory outside of the CozyNet webroot, it’s easier to apply smaller changes down the road without having to touch every HTML document.
Additionally, I make use of basic HTML documents as templates and combine that with the functionality of the Marvim VIM plugin. Marvim is pretty much a VIM macro storage and recall plugin, where you can save your macro or even a selection of text to recall later.
I wanted to demonstrate Marvim in a few scenarios so that you could get an idea of its functionality, especially with making web pages.
Let’s say that you want to slap down an image element in a document without having to type it all out manually. With Marvim, you can call up a saved example of this element with all the little bits and bobs.
Another scenario could include a table with a row and a few extra things.
As you can see, it saves on a lot of typing!
Step 1: Download the marvim.tar.gz file from this link. You’ll need to use the 0.4 version. The 0.5 Beta version is busted crap and won’t work, so don’t even bother with it!
Step 2: Extract the contents, tar xfv marvim.tar.gz.
Step 3: Create a “plugin” directory within your ~/.vim directory, mkdir -p ~/.vim/plugin.
Step 4: Copy the marvim.vim file into the ~/.vim/plugin directory.
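Put together, steps 2 through 4 look like this in a shell (assuming the 0.4 tarball unpacks marvim.vim at its top level; adjust the cp path if yours lands in a subdirectory):

```shell
#!/bin/sh
# marvim 0.4 install, per the steps above
tar xfv marvim.tar.gz            # step 2: extract the download
mkdir -p ~/.vim/plugin           # step 3: make the plugin directory
cp marvim.vim ~/.vim/plugin/     # step 4: drop the plugin in place
```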
Starting up VIM will cause the marvim plugin to auto-generate a ~/.marvim directory. Based on the file type from which you'll save a macro, it will create a corresponding directory for that file type that will contain the macro templates. So for example, if you’re editing an HTML file in VIM and decide to save a selection of text with marvim, it will store that selection in ~/.marvim/html.
I’m just going to repeat what’s on the VIM plugin site here. It’s pretty simple to use!
Store a new macro to the repository:
Save a template into the repository:
Recall a macro/template by searching:
Replay the previously loaded macro on multiple lines for each line:
I am aware of other methods of editing web documents, but I still prefer VIM. Since I don’t intend to make any major overhauls to the site design, this has worked out pretty well, and I suspect that some of you could also benefit from this approach.
You can register multiple macros to various keys to create a pretty fluent web editor out of VIM. As I’ve already said, combining this with PHP scripting and document templates goes hand in hand.
Thanks for reading my blog!
This one is another big post with a lot of images in it, so to keep from bombing your feed reader, just click the link to the site instead.
There’s a lot in this blog post that might not translate well into a feed reader (i.e. code / scripting), so I recommend reading it on the site itself.
See you there!
Is it just me, or is the chocolate candy tasting kinda crap these days?
Last week was a BUSY week for the site. Seeing the rise in traffic, I spent a lot of time tightening some things down security wise since I saw people in the logs poking around at stuff. I was being lazy and didn’t really care about it until now, but man am I glad that I at least completed the comment captcha form before then!
About 1,442 unique visitors for 2023-02-08! I'm sure there's some bot traffic in there, but it's a lot more than the usual 230 avg. So far the VPS handled it well.
The IRC was bustling for awhile too. I also updated the KiwiIRC web interface and gave it a really cool welcome screen.
Anyways, after shilling my video “Boomer was here” in a few /WSG/ threads, it eventually made its way into the Tweeter normie sphere which saw a lot of eyes and brought lots of traffic. I’m glad to see that it made people laugh and happy!
For any new readers here, welcome to the blog! So, like the first paragraph of the front page says, this is just a place where I post random things. I mostly post about tech stuff, some housing stuff because I’ve been looking for a house, interesting threads that I find on the boards, and mundane things like the choco chip cookie recipe. It can all be quite random and sometimes spur of the moment. Hypothetically speaking, it is completely possible that I might just out of nowhere bring up a topic that nobody asked about.
I do have some things in mind that I would like to write about, but then I get easily distracted by other things and then forget what I was doing or become disinterested. But I’ll get back to it soon™. Just wanted to make this post to say thanks for all the comments, I’ve read em. And also...
Thanks for reading my blog!
Happy valentines!
This one is another big post with a lot of images in it, so to keep from bombing your feed reader, just click the link to the site instead.
I've updated the RSS feed today, hopefully making it a little more compliant with the W3C spec. There are a few things missing, like the GUIDs. I couldn't really figure out how to get those working, but they're not that important.
I've also cleared out the old feed items, so we're starting fresh! Don't worry about missing out on anything, it's all still on the blog page. This feed is really only intended as a notification service, so the contents are temporal and will be rotated like a log over time.
I wrote a couple of little tools a bit ago, and I'm moving them to here from the front page of galladite.net to tidy it up a bit.
I had to boot into a live USB to fix my Gentoo kernel, so I made this script. You can run it with "bash <(curl -s https://galladite.net/resources/genenv.sh)". Always remember to curl a file normally and read it before running it with a shell.
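In the spirit of that warning, here is a sketch of the save-read-run pattern (the URL is the one above; use whatever pager you like):

```shell
#!/bin/sh
# fetch to a file instead of piping straight into bash
curl -s https://galladite.net/resources/genenv.sh -o genenv.sh
${PAGER:-less} genenv.sh   # actually read it first
bash genenv.sh             # run it only once you're happy
```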
Wpa-tool is a bash script I wrote to make command-line control of WPA connections easy. It includes commands to configure, connect to, disconnect from, and automatically reconnect to networks, built atop wpa_supplicant, its only dependency. It is highly recommended that you read the comments at the bottom of the file before using it. It is very messy.
Here it is:
if xhost >/dev/null 2>&1; then pfetch; else startx; fi
Make sure that you have xhost installed (gentoo: x11-apps/xhost), and pfetch if you want it. Then put this at the bottom of your shell's rc file (like .zshrc) and ensure that /bin/sh links to another shell (like /bin/dash), so that scripts starting with #!/bin/sh won't run startx or pfetch, and you're all set.
This script will start X (using startx, see STARTX(1) and XINIT(1)) when you first log in, although you can get back to the login shell with "pkill x", and whenever you start a terminal it will run pfetch before dropping you into the command line.
All of this is done with only one line of shell, and it can be customised to run whatever commands you want, either exclusively after login or exclusively while in a graphical environment.
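As a sketch of that customisation, the same probe can branch on whatever commands you choose (the echo lines stand in for your own commands, and xhost is assumed installed for the probe to mean anything):

```shell
# xhost exits 0 only when it can reach an X display,
# so this branches on "am I inside X or at the login TTY?"
if xhost >/dev/null 2>&1; then
    echo "inside X: run per-terminal commands here"    # e.g. pfetch
else
    echo "login TTY: start the graphical session here" # e.g. startx
fi
```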
Published: 2022-8-28
A Nokia 3310 3G TA-1022 (which from now on shall be referred to as "the 3310") was given to me for free, and while .mp3s play fine, .mp4s don't: only the sound plays. Nokia's official help page is no use, only stating that "not all video formats are supported".
Upon searching through forums, I discovered that the 3310 uses .3gp video encoded with h.263 for some reason. Here is a script to convert a file using ffmpeg (copied below).
#!/bin/sh
for i in *; do
ffmpeg -i "$i" -s 352x288 -ar 8000 -ac 1 -vcodec h263 -f 3gp "${i%.*}.3gp"
done
"-i" takes the input file
"-s" sets the resolution. H.263 only accepts specific resolutions and this is the smallest which is still larger than the 3310's 240x320 screen.
"-ar" sets the sample rate of the audio. Again, h.263 is specific.
"-ac" sets the audio channel count to one (mono). H.263 stuff.
"-vcodec" and "-f" set the encoding and format to h.263 and 3gp.
The last bit saves the processed file as its previous name but with the .3gp extension.
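That "${i%.*}" is plain POSIX parameter expansion: "%" strips the shortest match of ".*" from the end of the value, i.e. the final extension. A quick illustration:

```shell
#!/bin/sh
# strip the old extension, append .3gp (same expansion the loop uses)
i="holiday clip.mp4"
echo "${i%.*}.3gp"   # prints: holiday clip.3gp
```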
It is worth noting that ffmpeg must be compiled with AMR support (see the Gentoo wiki on USE flags).
After being converted, videos may be copied to the 3310's tf card or internal storage as usual. While 3gp and h.263 are a hassle, filesize is reduced considerably to the point where using the internal storage of 32MB becomes a viable option for videos spanning only a few minutes. For example, a 6 minute MV which I had downloaded from youtube became only 10MB.
Published: 2022-7-1
When I was deciding what distro to use before making the switch to linux, the minimalist ones all seemed similar. This is a breakdown of some of them, what sets them apart, their use cases and their pros and cons.
The distros I'll be comparing are arch, debian, void, nix, guix, gentoo and slackware.
Arch
Features: binary-based, uses pacman, uses systemd, AUR, bleeding-edge
Pros: popular, good wiki, good support
Cons: systemd, medium difficulty to install
--> systemd-free version: artix
--> see also: parabola, hyperbola
Debian
Features: binary-based, uses apt, uses systemd, stable or testing versions
Pros: rock-solid stable, very good support, average difficulty
Cons: systemd, stable has older software
--> systemd-free version: devuan
Void
Features: binary-based, xbps-src to build your own packages, uses xbps, uses runit, stable rolling release
Pros: no systemd, easy to install, very minimal, good for tinkering
Cons: less popular, less support
Nix
Features: binary-based, uses nix, uses systemd
Pros: nix, reliable, reproducible, good work environment
Cons: difficult, not that common, less support
Guix
Features: binary-based, uses guix, uses gnu shepherd
Pros: nice package management features, hackable, adheres to GNU FSDG
Cons: difficult, not that common, less support
Gentoo
Features: compiled packages, uses portage, uses OpenRC by default
Pros: use flags, very minimalist and fast, customisable, good for tinkering, amazing wiki
Cons: slow to install packages, very difficult, takes a lot of time
--> systemd version also available
Slackware
Features: binary-based, uses pkgtools and slackpkg, uses slackware-init
Pros: very "UNIX-like", no systemd, very stable, great to learn linux with
Cons: not the easiest, package dependencies are not automatically installed, quite old
Best server: debian (easy and stable), gentoo (fast), slackware (alternative to debian)
Best desktop: arch (popular and up-to-date), void (what I use)
Best for tinkering: void (v. minimal, runit), gentoo (use flags), guix (Guile Scheme APIs)
Best for development: nix (isolated environments, rollback), debian (common, there are many debian-based distros), arch (same as debian)
Best for learning: slackware (UNIX-like, very "vanilla"), gentoo (throwing yourself into the deep end)
Published: 2022-5-22
Ah, holidays. Amidst the pandemic and the total chaos, I found a place to relax and re-organize my life. I'm one of the privileged people on earth that can do this right now. To everyone facing the severe consequences of our time, my thoughts are with you. I wish I could do more to support you, so feel free to contact me if needed.
Anyways, since I have time on my hands, I decided to clean up the residue of being a netizen for over 10 years: old accounts in web applications, most of them in company silos that I don't want to endorse in any way. Starting with my online git repos, I deleted my github account. I stopped using it after the microsoft acquisition, so it is a surprise that it survived this long.
Why I want to stay away from microsoft products shouldn't be a surprise. Richard Stallman has a collection of facts showing that using them is in fact being used by them [1].
I also deleted my non-active twitter account, my gmail account, dropbox, and all the other services that I signed up for over the years. It feels liberating. At the same time, I don't have illusions. I've fed a lot of personal data into these machines and my privacy is terribly compromised. This is not a cause for despair though. At any point we can reclaim some privacy, and deleting my accounts is a step in this direction.
Modern web disservices are insatiable monsters that devour our data. Given the power that is hidden in these data, it's safe to assume that they will be used for malicious purposes. That's what history has proved repeatedly.
An interesting hack would be to spam these services with useless and fake data. I think this kind of activism is one of the answers to surveillance capitalism. If anyone has any ideas on how to successfully create such a distraction at a large scale, I would be most willing to help implement and utilize it.
A friend came around and brought me this documentary today, since he knows I'm interested in the whole digital society / surveillance capitalism issue. For the uninitiated, the documentary explores and presents the ways in which social media and other recent technological 'products' and 'services' affect our reality. Generally it was quite informative, and I would recommend it for people that want an idea of why it is an issue, and how things work in a broad sense.
I'm jumping in on this blog though to share some thoughts on the situation. There are a whole lot of ways to look at the mass psychological operation that we are involved in, and I feel the historical context is severely lacking in the documentary. So let me buckle up as I try to dig into my brain and organize my thoughts in a (semi-)presentable way.
Starting with the negatives - and hoping to creatively transform them by the end of this article - the main grind-work of the machine is the addiction of its users. The constant gratification of the upcoming notification, post, whatever, is not happenstance. It's a process that is one of the main tools of social media, entrusted to algorithms that toy with us to maximize 'engagement' and so make us give them more.
The process creates valuable data on the parts of ourselves we never realize. This is then used by advertisers to shift our behavior toward needing more and consuming more. It's a good business model if you ask me. Why try to create good products when you can just condition people to want the ones you want to sell?
This model was the answer to how social media and other web companies would monetize their users. Since selling digital creations failed, this new kind of value extraction emerged. But when the door opened they looked inside and decided that it was good - for their pockets.
So getting more users and hooking them on the dopamine hit of connecting with others and being recognized or liked or even just accepted opened up a whole industry of human exploitation - one where growth comes through brainwashing and values that are forced into the back alleys of our brains.
What happens if you indulge yourself in the thrill of being liked? What happens when it's normal to feed your self-image through thousands of other people? Well, it's easy to guess. We left behind the self in exchange for acceptance, but then they didn't want us to feel accepted, since if we have most of what we need, we don't buy things. No number of retweets is enough, no number of followers is ever satisfying. And in the meantime we develop ourselves in the image of the advertiser. We want more and more, to prove that the world shall turn its eyes on us. And we watch the world to see when this happens.
It's not shocking that teenagers, in their ever stressful research of the ego, have it the hardest. Growing up in constant criticism of the appearance, of the mannerisms, of whatever is the latest fad is a heavy burden. Heavy enough that self-harm and suicide has skyrocketed in teenagers. 1 This growing depression on vain reasons is not restricted to teenagers though. Everyone wants to be accepted. The one who controls how you should be to be accepted, is the one that controls your actions.
This naturally leads to the next level of the rabbit hole. If you have managed, through the analysis of massive amounts of personal data, to gain a degree of control over people's actions, why use these powers just for advertisements? Anyone capable of this and sociopathic enough would use it for jamming everyday life.
And the jam is on! Oh, and it's good. It's the best kind of propaganda, the one we were waiting for. If I'm a conspiracy theorist I get pizzagate and Qanon; if I'm a liberal I get force-fed cancel culture and identity politics. The best kind of propaganda is the one that feeds on what I am and pushes me a step further, the one where my actions seem the natural development of what I am. How am I then going to believe that I was ever brainwashed?
See what happened in Myanmar [2], in elections in countries big and small alike, what happens now in the streets of the USA, what happens with coronavirus, what happens in general in our socio-political environment. The digital bubbles have trapped consciences inside of them, building echo chambers that nullify our thought processes and feed the animalistic part of our brains for the benefit of the few.
Most of the previous are raised adequately in the documentary. An important missing point for me is the effect of pornography on our erotic lives. Pornography, as well as the whole pack of dating apps, has intruded into the deepest parts of our connections with others; they have marginalized and quantified them. I won't really expand on this, as I'm working on an article that goes deeper into the issue, but I think it's a notable omission.
So that's the situation, and here ends my agreement with the views in The Social Dilemma. In general I agree with the problems that are presented, but I'm not really convinced by the solutions proposed.
The main consensus in the documentary seemed to be that the explosion of social media was something created out of good will that had unexpected consequences. I don't buy that, to be honest. If the creation of such media was accidental, was it also accidental that we were so ready to lose ourselves in them?
Let's take things from the start of the industrial revolution. It was a point in history where, for the first time, work was interconnected at such a high level that a mass of people became completely dependent on a system of value for survival. This point of course didn't happen overnight; it had been developing since the ancient times of our culture. This organic dependence laid the groundwork for the need to control human behavior.
To be a good worker in the factory you needed to follow orders, to keep a schedule. The creation of huge cities resulted in people having less of an effect on their environment. A voice among millions is no voice at all. Being crammed into public transportation or stuck in traffic has the same impact on our psyches as when our latest post is not well received. The creative force demanded by survival was now the domain of a select few architects.
Thus started the biggest experiment of mankind, the conditioning of people as a hive, driven by the greed of the few that had the power to make choices. Schools, universities, institutes, banks, laws that demand we live a very certain lifestyle. Why should the government demand that I protect myself? But it's obvious: if I'm treated like a pawn, I'm protected like a pawn. I'm not allowed to take drugs - at least the ones not accepted by the system -, to drive without a seat-belt and so on.
So when the demand for workers was outsourced to 'developing' world countries, and the people in the West started to have time to play, the system went searching for the new concept that, like work, would keep the masses pacified. Who has the energy to live when working 12 hours a day?
It was at this point that consumerism thrived. If you always need more of what the system offers, you need to support it with all your livelihood. So advertising and 'growth' and engagement became the gospels of the recent world. Value was now the endless shitty products that one would amass, the constant whetting of one's appetite for more and more.
That was how we were ready. At least some of us. It is strange, but I sometimes feel that in a weird Nietzschean twist the world became totally nihilistic, but with two different expressions. The first expression of modern-day nihilism is the happy participant in the system, the over-eager pawns that offer themselves as sacrifices in the ritual. Accepting that making money for your employer is the highest value in life, they reject their lives, working long hours to create stuff that returns to damn them. In this category we can find the people that created social media and such services. Success even in the absence of self. Hyper-caffeinated freaks that took the race far too seriously. These are the people that, when their freaky schemes of surveillance get bothered, will tell you that it will hurt the businesses, especially the small ones, as if they are entitled to dissect our lives for their profit. Or as if profit is a higher value than life itself. Adam Mosseri of Instagram is certainly in this group.
The other growing group of nihilistic behavior contains the ones that totally reject social life. NEETs, hikikomoris or whatever the terms, they are absent from life, since they believe that the system is what life is. This misconception makes people who disagree with the obvious madness around us reject themselves, nullifying their presence in the world. Hairless angels in dark basements, demanding nothing but to participate in what they are offered.
What both groups have missed, though, is that, in or out, no one is out there to save us.
Truth is slippery. Here comes the good part. The biggest issue I had with the documentary is that the main suggestion is to try to get legislation and control and law and all this nice stuff to save us. Let me repeat myself. No one is out there to save us.
As I discussed in the previous segment, there is some historic continuity with the current situation. There is, however, a big recent breakthrough that makes the current situation quite fun, actually. The new element is the dissolution of the absolute truth, of the objective idea, value or morality. Through the creation of endless safe spaces for all kinds of ideas, the dis-social media directly challenged truth.
We are now fed only the information that enforces our views, our truths. This has quite the impact on reality, hence post-modernism for all. The left considers the right dangerous and vice versa. What is then needed is to build a big enough prison, so that we all fit in. (Wait a minute…)
I will be quite provocative for a bit. We have no rights. Democracy is rigged and a facade. Don't trust anyone over 27. I'm 30. Yikes. What I'm trying to say is that I'm bored of people seeing a problem and proposing to return to the past for a solution. Probably has to do with the curse of the Greyface or something. Download the Principia Discordia for instant enlightenment, you will thank me.
OK, so now that my rant is over, let me explain myself. Since the creation of nation states there has been a central truth enforced by a certain institution, whether that was the church, the government or the Illuminati. Proposing to solve our current situation by the return of the central agency, especially with the new tools in place, is a very bad move in my opinion. If we give governments the power to regulate the usage of networks and facts, what is the guarantee that things will be better? I'm sorry, but I'm quite bitter about this, as it was the main proposal in the documentary.
The system that created this exploitation was blessed by governments and institutions of all kinds. When they saw what Facebook accomplished in 2016, during the USA presidential elections, they were rubbing their hands thinking of the possibilities.
My suggestion is: don't buy it. We have no rights if we have no power to protect them. The only way forward is to embrace the divide and learn to co-exist with those who think differently from us. To do so, though, we need to transform the very values that are ingrained in us. We need to reject success, to re-define value, to reclaim our space in nature, to liberate ourselves from identities and roles, to reprogram our correspondences.
If we keep clinging to them, we will create even more traps as we try to solve the current ones. As much as I despise the rapist who rapes a woman to hold leverage over her career, even more I hate the culture in which someone needs to fall prey to others to achieve success. I hope my phrasing is clear and doesn't imply any kind of blaming the victims of rape. What I mean is that this concept of success is fake.
I promised fireworks and hope I can deliver! All the analysis is pretty but leaves a bad taste in my mouth. Life is wonderful and has so many things to explore, to learn, to share, to experiment with. Even when all seems bleak and ominous I can't stop seeing the beauty. People everyday challenging the world around them, experimenting and doing their best to have fun and share it.
That's my main belief and what I see as the power of the human animal. We just need to go out there and play more. People need adventure and games, but the system gives them safety and work. (Is this Nietzsche again?) Nothing is lost; we still have our connections, our dreams, our appetite for what's true.
We can't let some greedy no-lifers make us suffer. We can't let our sisters and brothers feel lonely and powerless. We have to travel the path of searching, to live the experiment of ethics and aesthetics. Here from my small place in cyberspace, I reach out to you and say in the most comedic voice I ever managed to produce "Be yourself. It's gonna be daijoubu".
Think for Yourself, Schmuck! (If you say this 5 times in front of a mirror at exactly 23:23 at 05/05 Celine Hagbard will come in your dreams and give the best recipe for mashed potatoes.)
These are some thoughts I had during the whole issue that led Richard Stallman to step down from the FSF presidency. They may be out of date, but seeing that cancel culture is only spreading, and is many times endorsed by Big Tech, I thought I would share them here.
They mostly answer the call for RMS to step down from GNU as well.
If I'm allowed to jack in with a little comment here: GNU is not something official, it is a project, an initiative, and specifically RMS's initiative for creating a free software system.
So in that case, I find it obvious that its current leader and founder is not accountable to anyone. It's his project. I don't really understand why people want to get him out of his own project. If you don't like it, just create your own initiative and stop creating GNU software. It's OK, sometimes we can't get along with everyone. The fine people at Software Freedom Conservancy did so.
I still don't like what happened at the FSF. But then, that was its choice. RMS seems to still want to lead GNU. It's his project, and he doesn't seem to care a lot about marketing, so I don't think he will let go. People who contribute to GNU are assumed to at least be in line with the minimal requirements for doing so. If someone wants a more community-oriented governance, they can create their own structure.
I would be very willing to hear why the FSF should cease to support GNU. Is RMS some evil figure that will destroy everything? Is GNU not true to its principles, or to the principles of the FSF?
I truly accept that maybe the GNU community is not for everyone. That maybe some choices weren't really inclusive and that this alienated parts of its community. I haven't personally observed that, but I still believe it's probably true, since lots of people have said it happens. I accept that some people involved would like to have more power over what happens and how things are managed. But they were never promised that this would happen.
So please, and in truly good faith: people who feel this way, start your own movement. With the CoC you want, with the model of governance you want, fixing all the mistakes of GNU and the FSF. But trying to hijack another movement? I find GNU acceptable under RMS's governance, so I choose to participate in it. You may find it unacceptable. At the same time, more initiatives are good. This way even people with different values and ideas can still participate in free software communities. This would be wonderful. Imagine a group of left-wing free software hackers, a group of right-wing free software hackers, a group of non-political free software hackers (if such a thing exists), anarchists, etc.
We don't need a mother-ship in free software. Decentralize the movement so it can have the widest possible appeal. That would be great for everyone. People would participate in communities they feel they can actually express themselves into.
What has been happening these last months only works against free software. We are just attacking each other. This needs to stop, in my opinion.
For all I care, GNU could go down and burn, and I'm not holding RMS in any special status. I don't think any human can be held up as a moral compass. He is just human. But trying to erase the man from the movement he started and offered his life to is sad. He has every right to run the movement the way he wants to. If you want something different, fork; stop nagging. That's the free software ethos, that's the community ethos, that's the hacker ethos.
When I see people working in Salesforce, or worse, trying to hold RMS accountable for his commentary on what is rape or not, I can't help but laugh.
I'm writing this wall of text as an outcry at what has been happening lately. All in good faith. I try to understand all your reasons; I just don't agree with your actions. I would like to hear why you believe it would be of value for the free software movement for GNU to be hijacked, instead of another movement spawning. Why cause conflict between people who feel aligned with GNU/RMS and those who don't? We work on the same thing. How we organize and engage in our communities is our choice. Why is it that we must have just one type of community?
So, another social post, but what can I do when inspiration knocks on my door? This piece was conceived after a suggestion to read Gilles Deleuze's "Postscript on the Societies of Control". Its main concept is the transition away from societies labeled as disciplinary, societies that organize themselves around closed spaces with certain laws. The person moves from one space to another, i.e. from the family, to work, to the hospital, adhering to each enclosure's rule-set.
Transcending these disciplinary societies is the new model of society of control. In this new sense:
In the disciplinary societies one was always starting again (from school to the barracks, from the barracks to the factory), while in the societies of control one is never finished with anything – the corporation, the educational system, the armed services being metastable states coexisting in one and the same modulation, like a universal system of deformation.
Sounds familiar? Thought so myself, so I decided to apply this view on a few pillars I found interesting.
Deleuze somewhat touches on machine production, commenting:
The old societies of sovereignty made use of simple machines–levers, pulleys, clocks; but the recent disciplinary societies equipped themselves with machines involving energy, with the passive danger of entropy and the active danger of sabotage; the societies of control operate with machines of a third type, computers, whose passive danger is jamming and whose active one is piracy or the introduction of viruses.
However, observing the software industry carefully, we can trace the same transition in computers and especially in software. The first forms of computers aligned with the disciplinary society. They were huge machines, with many users and complex rules, constituting an enclosure in themselves. Even in more recent years, buying for example Windows 95 would buy you a monolithic operating system, stable and with highly defined interactions.
The transition can be traced to the release of the first iPhone. Although the rolling model of software had been experimented with in free software projects, the control it imposed was realized in the App Store. There, software was "assured" by a central agency and also updated on its own, many times changing its interaction with the user. The user was now in a perpetual state of learning.
This was made even more extreme after the solidification of computers as always-connected devices. Exposed to the threats of networking, computers had to be always updating, always changing, to keep viruses and malware away. Windows Update has become a meme of interrupting users. Web applications break the contract with their users at any chance, releasing often, experimenting on their users, creating new interfaces on the fly, new end-user agreements and privacy policies. At the same time the publicity imposed by this connectivity brought the corporation into the mind of the user, who now filters his communication through corporate conduct rules.
Constant tracking, advertising and insidious psychological operations are then used to explore the next depths of control.
At work the effects are most evident. The modern worker jumps between jobs, constantly seeking to acquire more professional skills. The gratification of the process is gamified, with corporate needs appearing as "hobbies" in individuals. Apart from the obvious introduction of the "corporation" into schools and universities, it has even invaded the free time of workers, becoming a spook of productivity, a constant feeling of distress.
The new work place is a place of uncertainty, where the corporation extends from education in non-visible borders; it expands gradually into the visible sphere as a continuous function of renewable needs.
Ideology, seen as a product, can also be seen as constructed by the forms of societal organization. Since the spiritual corporation, with its ethereal promises, invaded the worker, his ideology morphed along with it. Leaving behind the infinite identity of nation - the thousand-year Reich - and of left/right, the new worker has accepted a continuous stream of post-modern group identity. From "alt-right" to "SJW", the person accepts a constantly changing set of beliefs, evolved not through its own discourse, but externally, as part of media campaigns or cyber-trolling.
The person can no longer argue for their beliefs. While in disciplinary societies freedom of speech was unaccounted for, since the system already reigned over the partial enclosures, in societies of control speech is an impediment to the group identity. New ways of outrage or enthusiasm emerge every day, respect gets legalized or abolished, conduct is based upon artificial constraints - not unlike the ones imposed by proper business conduct. Friendship, sexuality and communication are commodified by ever-changing contracts, promising liberation and safety.
The resulting landscape is one of reproducing anxiety. Not conforming to any of these contracts ostracizes the person from discourse - they are labeled as fascist, leftist, etc.-phobe. Canceling works as the ultimate force for not speaking. Corporations fire people based on out-of-work incidents, so that the worker's social life has to be marketable. In this way even personal expression is invaded by the corporate agenda, resulting in people "producing" themselves as business consumables, products that add surplus value to themselves.
Please send any comments you have on this. What are your ideas? What are the new resistances we can develop? Are we doomed?
The 8 queens puzzle is an almost two-century-old problem, proposed by Max Bezzel in 1848. Although I had encountered it in the past in some combinatorics classes, I was recently reminded of it while studying SICP. In chapter 2.2.3, exercise 2.42 asks the reader to implement a procedure queens which solves the problem for any n*n chessboard.
The main idea of the problem is placing 8 (or n) queens on an 8*8 (or n*n) chessboard so that no queen is in check from any other, meaning no two queens share a row, column or diagonal. To visualize the problem a bit better, grab a board and try to solve it yourself.
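To make the constraint concrete before diving into the Scheme code, the attack check can be sketched in a few lines of Python (my own illustration, not part of the SICP skeleton), with positions as (row, col) pairs:

```python
def attacks(q1, q2):
    """True if two queens threaten each other:
    same row, same column, or same diagonal."""
    row1, col1 = q1
    row2, col2 = q2
    return (row1 == row2
            or col1 == col2
            or abs(row1 - row2) == abs(col1 - col2))

print(attacks((1, 1), (3, 3)))  # same diagonal -> True
print(attacks((1, 1), (2, 3)))  # a knight's move apart -> False
```

A solution is then any placement of n queens for which attacks is false for every pair.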
There are many algorithmic solutions to this problem, some of which are explained in the Wikipedia entry for the puzzle. In SICP it is proposed to find all solutions by means of recursion: we check the nth queen's placement given an (n-1)-column board where n-1 queens are already placed successfully.
The skeleton given in the book and some helper functions follow:
(define (enumerate-interval low high)
  (if (> low high)
      '()
      (cons low (enumerate-interval (+ low 1) high))))

(define (accumulate op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence)
          (accumulate op initial (cdr sequence)))))

(define (flatmap proc seq)
  (accumulate append '() (map proc seq)))

(define (queens board-size)
  (define (queen-cols k)
    (if (= k 0)
        (list empty-board)
        (filter
         (lambda (positions) (safe? k positions))
         (flatmap
          (lambda (rest-of-queens)
            (map (lambda (new-row)
                   (adjoin-position new-row k rest-of-queens))
                 (enumerate-interval 1 board-size)))
          (queen-cols (- k 1))))))
  (queen-cols board-size))
So, to start filling in the gaps, we need to decide on the representation of our board and positions. I chose to represent a position as a list of two elements and a board as a list of such positions:
(define (pos x y) (list x y))
(define (pos-row pos) (car pos))
(define (pos-col pos) (cadr pos))

(define (pos-diagonal? pos1 pos2)
  (= (abs (- (pos-row pos1) (pos-row pos2)))
     (abs (- (pos-col pos1) (pos-col pos2)))))

(define empty-board '())
Then we have to define the adjoin-position procedure, which adds a new position to a set of positions (a board), and of course a safe? procedure that checks whether a set of board positions is valid. Return to the definition of queens and notice that each time we only need to check the safety of the newly placed queen at the kth column. The rest are already checked.
(define (adjoin-position row col set)
  (cons (pos row col) set))

;; queens calls (safe? k positions), so the column index comes first.
;; The newest queen, the one we need to check for safety, is in the car of set.
(define (safe? k set)
  (define (attack? pos1 pos2)
    (or (= (pos-row pos1) (pos-row pos2))
        (pos-diagonal? pos1 pos2)))
  (let* ((new-queen (car set))
         (rest (cdr set)))
    (accumulate (lambda (new-pos results)
                  (and (not (attack? new-queen new-pos)) results))
                #t
                rest)))
Our algorithm checks each row for every column and eagerly rejects partial solutions as soon as a column with no safe positions appears. In this way it is quite performant and gives solutions for boards up to size 12 in acceptable time.
One way we could improve performance is to change our data representation for positions from a list to just a number. This way a board becomes a flat list of row numbers, and so we reduce the number of cons cells created.
To do so we need to think about how to check the diagonal attacks:
(define (adjoin-position row col set)
  (cons row set))

(define (safe? k positions)
  (let ((new-queen (car positions)))
    (define (iter rest diagonal anti-diagonal)
      (cond ((null? rest) #t)
            ((= new-queen (car rest)) #f)
            ((= diagonal (car rest)) #f)
            ((= anti-diagonal (car rest)) #f)
            (else (iter (cdr rest) (- diagonal 1) (+ anti-diagonal 1)))))
    (iter (cdr positions) (- new-queen 1) (+ new-queen 1))))

(length (queens 12))
With this representation we can reach up to (queens 13), and our program is faster and less memory-hungry. This optimization was possible because we worked with abstract procedures that let us change the underlying data representation. To change the program we only needed to provide two new functions.
SICP is surely a great book, and I think it's really worth spending the time to study it deeply. If I could propose one book to software engineers, it would be this one.
🦉
It's no surprise that webcams are in demand right now. I recently came to need one, since I started tutoring online again. Shopping around for a Logitech C270, I came to realize that prices have more than doubled for a webcam that is actually quite old.
I was pretty irritated and hesitant to spend around 50€ on a webcam, so I started exploring other solutions. I have an old Motorola E2 android phone lying around. I already use it as a tethering device (among other things) and thought it would be quite nice to use it as a webcam too. Meet DroidCam! It's a (sadly proprietary) android application that can serve the video feed from your phone as a webcam feed to your computer.
The installation process is pretty straightforward: get the apk from somewhere to install on your phone and then follow the instructions here to install the program and drivers for your (gnu/linux) computer. This registers the android camera as a v4l2loopback device, which works as a proper webcam for most applications - I haven't encountered any problems using Jitsi (both natively and as a web app) and OBS Studio.
However, the fact that I had to reach for my mobile in order to enable the webcam was a bit annoying, so I decided to write a simple script to enable and disable it. For it to work you need adb installed and configured, as well as the psutil python library and of course a python interpreter. If you try to use it, don't forget to set your passkey and also define the preferred connection method (check out the droidcam documentation).
So here is the script:
#!/usr/bin/env python
from subprocess import call
from time import sleep

import psutil

port = "4747"
package = "com.dev47apps.droidcam"
connection = "adb"  # pass the ip here if you want a network connection
unlock_passkey = "****"

# keycodes for android
key_power = "26"
key_menu = "82"
key_enter = "66"


def checkDroidCam():
    for proc in psutil.process_iter():
        try:
            if 'droidcam' in proc.name().lower():
                return True
        except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
            pass
    return False


def enableWebCam():
    # unlock the phone
    call(["adb", "shell", "input", "keyevent", key_power])
    call(["adb", "shell", "input", "keyevent", key_menu])
    call(["adb", "shell", "input", "text", unlock_passkey])
    call(["adb", "shell", "input", "keyevent", key_enter])
    # start the android application
    call(["adb", "shell", "monkey", "-p", package, "1"])
    # connect
    sleep(2)
    call(["droidcam-cli", connection, port])


def disableWebCam():
    call(["killall", "droidcam-cli"])
    call(["adb", "shell", "am", "force-stop", package])
    call(["adb", "shell", "input", "keyevent", key_power])


def main():
    if checkDroidCam():
        disableWebCam()
    else:
        enableWebCam()


if __name__ == "__main__":
    main()
Nothing special for sure, but it's a nice feature to have, given that now I can mount the phone on my desk at a nice angle and never have to move it again. By running the script with a keybinding I get the laptop feeling of enabling my webcam, and I can also cleanly exit droidcam when it's not needed anymore.
Even if you don't have an old phone around, you can buy one quite cheap and use it as a webcam, along with using it as a Piratebox, a server or anything else a tiny computer can be. To be honest this sounds like a much better investment than the prices webcams have reached right now. And for that amount of money the image is much better as well!
📷 📷 📷
Advent of Code is a programming challenge that has been running for the last five years. It offers a couple of problems to be solved for every day from the 1st of December till Christmas. I've tried to complete the series many times, but I always manage to abandon it after 10 or 11 days. Armed with renewed motivation this year, I plan to finish it this time and report on the solutions I've found every 4-5 days.
Since the day of writing this is the 5th of December (but in a timezone where the 5th problem of Advent of Code has not yet been published), you will find here some commentary on my solutions for the first four days.
The first day was actually asking for the solution of a subset sum problem, or in other words the problem of finding pairs or triplets of numbers that add up to a certain value. First we parse our input as a list:
(defun get-file (filename)
  (with-open-file (stream filename)
    (loop for line = (read-line stream nil)
          while line
          collect line)))

(defvar expenses
  (mapcar #'parse-integer (get-file "input1.txt")))
Then for the first part we think as follows: put every number in a hash table, then walk the list; for each number, look up whether the difference between the target sum and that number is also in the table, and if so collect the pair. These steps can be expressed somewhat like this:
(defun two-sum (list sum)
  (let ((hash (make-hash-table)))
    (loop :for x :in list :do (setf (gethash x hash) t))
    (labels ((recur (list acc)
               (if (endp list)
                   acc
                   (let* ((current (car list))
                          (target (- sum current)))
                     (if (gethash target hash)
                         (recur (cdr list) (cons (list target current) acc))
                         (recur (cdr list) acc))))))
      (recur list '()))))
The second problem can't be expressed recursively as easily, so I employed the mighty loop macro. Here the chain of thinking is that we add the values into a hash table as before, then loop over every pair of elements and check whether the difference between the target sum and their sum exists in the hash table.
(defun three-sum (list sum)
  (let ((hash (make-hash-table)))
    (loop :for x :in list :do (setf (gethash x hash) t))
    (loop :named main :for x :in list
          :do (loop :for y :in list
                    :do (if (gethash (- sum x y) hash)
                            (return-from main `(,x ,y ,(- sum x y))))))))
Since the problem implied that there exists only one such triplet, we don't collect all solutions but just return the first one we find.
Then we just have to multiply the numbers in the pair and in the triplet to get the answers. All in all it was a pretty easy day, and a good warm-up for my rusty programming skills.
Day 2 presented the problem of parsing lines in the form of "1-3 a: abvcd". We parse the input as before, so not much to say here.
For the first part we had to assume that the first two numbers represent a range, the character a target whose number of occurrences must fall within that range, and the string at the end a password that needs to be validated against this rule. Given the above, we have to calculate the number of valid passwords in our input file.
To easily parse this input we can use a regex. In Common Lisp one of the most used regex libraries is cl-ppcre, which supports Perl-like syntax.
Don't try to parse HTML with regex. The results are not worth it.
The regex to match our expression can be written as follows:
(defvar scanner
  (ppcre:create-scanner "(\\d+)-(\\d+)\\s([a-z]):\\s([a-z]+)"))
In essence we create four groups of interest: the two digits divided by the dash, the character before the colon, and the characters at the end. Now we can write our validators, using the ppcre:register-groups-bind macro to destructure the groups.
(defun validate-password-1 (string)
  (ppcre:register-groups-bind
      ((#'parse-integer start end)
       ((lambda (x) (coerce x 'character)) target)
       password)
      (scanner string)
    (let ((count (count target password)))
      (and (>= count start) (<= count end)))))
To finally count the valid passwords we just use reduce:
(reduce (lambda (acc x) (if (validate-password-1 x) (1+ acc) acc))
        password-input
        :initial-value 0)
Part 2 is quite similar. The only change is that now we have to treat the first two digits (the ones we treated as a count range) as positions for the target character and verify that exactly one of them has it. So it is just a simple modification of the above validation function and is left as an exercise for the reader 🤣.
On day 3 we had the problem of checking for collisions while moving over a map represented by lines like "..#...#..". Each hash represents a tree, so we need to count how many trees we would crash into by following a certain pattern of movement. Another twist is that the lines represent a pattern themselves: they repeat infinitely to the right.
To solve this problem I first counted the length of the lines, which happened to be 31. This way I could find the correct position using mod 31 arithmetic. Then I defined the following function:
(defun count-collisions (map right down)
  (labels ((iter (list acc pos)
             (if (endp list)
                 acc
                 (if (char= (aref (car list) pos) #\#)
                     (iter (funcall down list) (1+ acc) (rem (+ pos right) line-length))
                     (iter (funcall down list) acc (rem (+ pos right) line-length))))))
    (iter map 0 0)))
It's a recursive function that walks down the list at the speed defined by right and down, i.e. how many blocks we travel to the right and downwards in each step. Right needs a number, but since downwards movement is in fact how many lines we skip at each recursion step, it's expressed as a function of the cdr family. With cdr we check each line (1 downwards step), with cddr we check every other line (2 downwards steps) and so on.
To get the solution for the first part I just needed to run (count-collisions my-input 3 #'cdr), and for the second part I just had to calculate the product of several speeds.
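For cross-checking, the same idea can be sketched in Python (an illustration of the approach, not the Lisp I actually ran); the map is a list of equal-length strings, and the horizontal position wraps around with modular arithmetic (mod 31 for my real input):

```python
def count_collisions(tree_map, right, down):
    """Count the trees ('#') hit while moving `right` columns and
    `down` rows per step on a map that repeats to the right."""
    width = len(tree_map[0])
    hits, pos = 0, 0
    for line in tree_map[::down]:  # keep every `down`-th line
        if line[pos % width] == '#':
            hits += 1
        pos += right
    return hits

# Toy 4x4 map: the slope right=1, down=1 passes through all three trees.
toy = ["....",
       ".#..",
       "..#.",
       "...#"]
print(count_collisions(toy, 1, 1))  # -> 3
```

Note how the function-of-the-cdr-family trick corresponds to the `tree_map[::down]` slice here, and the second part's answer is just the product of `count_collisions` over the list of slopes.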
Finally, on the most recent fourth day (whoa, this post took longer to write than I thought), we had to validate imaginary passport data. The data were in blocks of key-value pairs separated by ":". The problem is that they were not presented in a standard sequence.
This challenge needed different parsing of the file, since each block was separated by a blank line and a block could span several lines. To have my data better organized I used this function to parse the input:
(defun get-file-per-paragraph (filename)
  (with-open-file (stream filename)
    (loop :for line = (read-line stream nil)
          :while line
          :with temp = ""
          :if (string= line "")
            :collect temp
            :and :do (setf temp "")
          :else
            :do (setf temp (concatenate 'string temp " " line)))))

(defvar passport-tests (get-file-per-paragraph "input4.txt"))
The first part of the problem was to check whether all the needed keys were present. To do so we can loop over the needed strings and search for each one. Then we just count the valid entries to get our answer.
(defvar passport-codes
  '("byr:" "iyr:" "eyr:" "hgt:" "hcl:" "ecl:" "pid:"))

(defun passport-validate (passport-data)
  (not (member nil
               (loop :for code :in passport-codes
                     :collect (search code passport-data)))))

(reduce (lambda (acc x) (if (passport-validate x) (1+ acc) acc))
        passport-tests
        :initial-value 0)
The second part wanted us to validate the data according to some rules. To do so I used cl-ppcre again to create patterns for the correct data. Then I wrote several validator functions, one for each kind. It was boring, to be honest, and reminded me of writing business code: unchallenging and repetitive. This left me with the feeling that there might be a better solution. If you, kind reader, know of one, please inform me through any means of contact. Thanks.
The boring but working code is:
(defvar byr-code-scanner (ppcre:create-scanner "byr:(\\d{4})(?:\\s|$)"))
(defun validate-byr (input)
  (ppcre:register-groups-bind ((#'parse-integer value))
      (byr-code-scanner input)
    (and (>= value 1920) (<= value 2002))))

(defvar iyr-code-scanner (ppcre:create-scanner "iyr:(\\d{4})(?:\\s|$)"))
(defun validate-iyr (input)
  (ppcre:register-groups-bind ((#'parse-integer value))
      (iyr-code-scanner input)
    (and (>= value 2010) (<= value 2020))))

(defvar eyr-code-scanner (ppcre:create-scanner "eyr:(\\d{4})(?:\\s|$)"))
(defun validate-eyr (input)
  (ppcre:register-groups-bind ((#'parse-integer value))
      (eyr-code-scanner input)
    (and (>= value 2020) (<= value 2030))))

(defvar hgt-code-scanner (ppcre:create-scanner "hgt:(\\d{2,3})(cm|in)(?:\\s|$)"))
(defun validate-hgt (input)
  (ppcre:register-groups-bind ((#'parse-integer value) type)
      (hgt-code-scanner input)
    (if (string= type "cm")
        (and (>= value 150) (<= value 193))
        (and (>= value 59) (<= value 76)))))

(defvar hcl-code-scanner (ppcre:create-scanner "hcl:#([0-9a-f]{6})(?:\\s|$)"))
(defun validate-hcl (input)
  (ppcre:scan hcl-code-scanner input))

(defvar ecl-code-scanner (ppcre:create-scanner "ecl:([a-z]{3})(?:\\s|$)"))
(defvar eye-color-codes '("amb" "blu" "brn" "gry" "grn" "hzl" "oth"))
(defun validate-ecl (input)
  (ppcre:register-groups-bind (color)
      (ecl-code-scanner input)
    (member color eye-color-codes :test #'string=)))

(defvar pid-code-scanner (ppcre:create-scanner "pid:(\\d{9})(?:\\s|$)"))
(defun validate-pid (input)
  (ppcre:scan pid-code-scanner input))

(defun passport-validate-2 (passport)
  (and (validate-byr passport)
       (validate-iyr passport)
       (validate-eyr passport)
       (validate-hgt passport)
       (validate-hcl passport)
       (validate-ecl passport)
       (validate-pid passport)))

(reduce (lambda (acc x) (if (passport-validate-2 x) (1+ acc) acc))
        passport-tests
        :initial-value 0)
One thought I had was that, since the validator functions shared a similar pattern, it could be nice to have a generator function for them. The means to abstract them so eluded me, and since this is a challenge that was solved I didn't push too much. I will return though (I hope).
So this is my experience so far. Dusting off my Common Lisp skills is the excuse I use to work on these challenges instead of my other projects (professional or not). At least it may help motivate me to work on the backend of discordia-chan. At the very least it's fun. And as a final word, that's all that matters.
Have fun everyone!
It's the 16th of December here, so let's do a small recap of my experiences from the last eleven days of Advent of Code. Since there are a lot of challenges, I'll just do some commentary and not really present all the solutions. I'll provide a repository later where you'll be able to find all of them.
Day 5 was asking to decode a binary format that was used to represent boarding pass codes for seats in a fictional plane. There were quite a lot of clues that this was the case, so it was quite simple to write down a solution that parsed the codes given into binary numbers. Then for part two the task was to find a missing step in the numbering.
All in all it was a quite easy challenge for day 5.
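The core of the parsing can be sketched like this (a reconstruction of the idea, not my exact solution): treat F and L as 0, B and R as 1, and read the whole code as one binary number.

```lisp
(defun seat-id (code)
  ;; F and L stand for 0, B and R for 1; the code is one binary number
  (reduce (lambda (acc ch)
            (+ (* 2 acc) (if (member ch '(#\B #\R)) 1 0)))
          code
          :initial-value 0))

(seat-id "FBFBBFFRLR") ; => 357, the seat id from the puzzle's example
```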
In day 6 the task was to count unique items in a group for the first part and then count common elements in slightly different groups. As the description shows it was quite easy as well and took little time to implement.
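Both parts boil down to set operations on the groups' answers; a sketch of the idea (again a reconstruction, not my literal code), with a group represented as a list of strings:

```lisp
(defun count-anyone (group)
  ;; part 1: questions answered by anyone in the group
  (length (remove-duplicates (apply #'concatenate 'string group))))

(defun count-everyone (group)
  ;; part 2: questions answered by everyone in the group
  (length (reduce #'intersection
                  (mapcar (lambda (line) (coerce line 'list)) group))))
```

For example, (count-anyone '("ab" "ac")) is 3 while (count-everyone '("ab" "ac")) is 1.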
Day 7 was the first challenging day for me, and it really helped me remember some graph concepts that I had forgotten. The first problem asks to calculate the nodes in a connected component of a graph that represents how bags are to be inserted in each other. A recursive solution with association lists worked fine for the first part.
In part 2 the question was how many bags are to be contained in ours, given rules for what the bag shall contain. A DFS for the directed graph was then implemented in order to discover all the nodes and count them. This was quite an interesting day.
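The counting for part 2 is a small recursion over the rule graph; something along these lines, where the alist representation of the rules is my assumption for the sketch:

```lisp
;; rules: alist mapping a bag name to a list of (count . inner-bag) pairs
(defun count-contained (bag rules)
  ;; each inner bag contributes itself plus everything it contains
  (loop :for (n . inner) :in (cdr (assoc bag rules :test #'string=))
        :sum (* n (1+ (count-contained inner rules)))))
```

With rules like '(("shiny gold" (1 . "red") (2 . "blue")) ("red") ("blue")), (count-contained "shiny gold" rules) returns 3.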
On the eighth day the challenge was around parsing and executing a pseudo-language with jumps and an accumulator. For the first part there was an infinite loop to be discovered, which I did by keeping the history of executed commands. If one was called twice, the program was surely going to run forever.
For part 2 we had the information that one of two kinds of instructions had been flipped, and that was what produced the infinite loop. So I checked each instruction that could be swapped and whether the resulting program would finish.
In all this day was not that difficult, but needed some tricks.
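The loop-detection trick can be sketched as follows (a reconstruction; representing the program as a vector of (op . arg) pairs is my assumption):

```lisp
(defun runs-forever-p (program)
  ;; returns T if execution revisits an instruction, NIL if it terminates
  (let ((seen (make-hash-table))
        (pc 0))
    (loop :while (< pc (length program))
          :do (when (gethash pc seen)
                (return-from runs-forever-p t))
              (setf (gethash pc seen) t)
              (destructuring-bind (op . arg) (aref program pc)
                (case op
                  (:jmp (incf pc arg))
                  (t    (incf pc)))))  ; :acc and :nop both just step forward
    nil))
```

The accumulator itself doesn't matter for detecting the loop, only the set of visited program counters.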
The challenge here was to find a number in a sequence that was not the sum of two numbers in a preceding group. Since we had the two-sum algorithm from day 1, it was easy to modify it to check whether a given number is a two-sum in a given group. Then we had to find a range in the same sequence that summed to the number found in part one. This was solved by starting from each element and accumulating the sum of the sub-sequence beginning there until the wanted sum was found or exceeded.
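The range search of part 2 can be sketched like this (a reconstruction of the approach described, not the literal code):

```lisp
(defun find-range-with-sum (numbers target)
  ;; start from each element and extend the sub-sequence until the
  ;; running sum reaches or exceeds TARGET
  (loop :for start :on numbers
        :do (loop :for n :in start
                  :sum n :into total
                  :collect n :into range
                  :when (= total target)
                    :do (return-from find-range-with-sum range)
                  :when (> total target)
                    :do (loop-finish))))
```

For example, (find-range-with-sum '(1 2 3 4 5) 9) returns (2 3 4).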
Here the first part was quite easy: calculating the number of differences of three and of one in a sequence of numbers. The second part, though, wanted to calculate all the arrangements that would work with the rule that a number had to be followed by one that was from 1 to 3 units bigger. This required a dynamic programming solution, going back over the numbers and counting the possible arrangements for each element.
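The dynamic programming fits in a few lines; a sketch of the idea (assuming the adapters are already sorted ascending):

```lisp
(defun count-arrangements (adapters)
  ;; ways to reach each adapter = sum of the ways to reach the
  ;; adapters 1, 2 or 3 jolts below it; the outlet counts as joltage 0
  (let ((ways (make-hash-table)))
    (setf (gethash 0 ways) 1)
    (dolist (a adapters (gethash (car (last adapters)) ways))
      (setf (gethash a ways)
            (+ (gethash (- a 1) ways 0)
               (gethash (- a 2) ways 0)
               (gethash (- a 3) ways 0))))))
```

On the small puzzle example '(1 4 5 6 7 10 11 12 15 16 19) this returns 8.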
Day 11 was a problem similar to Conway's Game of Life. The implementation I did was not the most performant, since it kept the whole board of positions in memory but it worked well enough in the end. Part 2 needed just a modification of what was considered neighboring between the positions and was interesting to implement.
Moving a ship given some instructions and then adding a waypoint to the whole ordeal. Quite a nice challenge; it reminded me of graphics manipulation, like rotating points around others. I used Lisp's structs to express the ship and waypoint, which led to simple but rather verbose code.
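The rotations themselves need no trigonometry, since the puzzle only uses multiples of 90 degrees; a sketch:

```lisp
(defun rotate-left (x y)
  ;; rotate the point (x, y) 90 degrees counter-clockwise around the origin
  (values (- y) x))

(defun rotate-right (x y)
  ;; 90 degrees clockwise
  (values y (- x)))
```

Rotating the waypoint by, say, 270 degrees is just applying one of these three times.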
This day's challenge was mostly about number theory, and part 2 even required the Chinese Remainder Theorem to solve. I was happy to do so and be reminded of it, after so many years since I last used it.
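The sieving formulation of the Chinese Remainder Theorem is short enough to sketch here (a reconstruction of the idea, with the congruences given as (remainder . modulus) pairs):

```lisp
(defun crt (pairs)
  ;; find x with x ≡ r (mod m) for every pair, by sieving:
  ;; satisfy one congruence at a time, stepping by the product
  ;; of the moduli handled so far
  (loop :with x := 0
        :with step := 1
        :for (r . m) :in pairs
        :do (loop :until (= (mod x m) (mod r m))
                  :do (incf x step))
            (setf step (* step m))
        :finally (return x)))

(crt '((2 . 3) (3 . 5) (2 . 7))) ; => 23
```

Sieving assumes the moduli are pairwise coprime, which holds for the puzzle since the bus ids are primes.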
More low level stuff in day 14, as the problem was around masking and memory positions. The bit level functions of Lisp proved really helpful, especially the boole function. Part two was also quite interesting as it introduced a floating bit, i.e. a bit that can take all possible values.
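Part 1's mask application is a nice showcase for the bit functions; one possible way to write it with boole (a sketch, not my exact code):

```lisp
(defun apply-mask (mask value)
  ;; 1s in the mask force bits on, 0s force bits off, Xs leave them alone
  (let ((ones  (parse-integer (substitute #\0 #\X mask) :radix 2))
        (zeros (parse-integer (substitute #\1 #\X mask) :radix 2)))
    (boole boole-and zeros (boole boole-ior ones value))))
```

On a shortened version of the puzzle's example mask, (apply-mask "X1XXXX0X" 11) gives 73.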
The elf game of day 15 was not really a challenge, and my implementation with a hash table happily responded even for the second part, which demanded 30 million iterations. The whole solution was:
(defparameter input '((20 . 1) (0 . 2) (1 . 3) (11 . 4) (6 . 5) (3 . 6)))

(defparameter starting-input
  (loop :for i :in input
        :with a := (make-hash-table)
        :do (setf (gethash (car i) a) (cdr i))
        :finally (return a)))

(defun calculate-next-number-v2 (prev alist turn)
  (let ((cell (gethash prev alist)))
    (if cell (- turn cell) 0)))

(defun elf-game-v2 (turn starting-alist)
  (labels ((iter (acc prev past-alist)
             (if (= turn (1+ acc))
                 (calculate-next-number-v2 prev past-alist acc)
                 (iter (1+ acc)
                       (calculate-next-number-v2 prev past-alist acc)
                       (progn (setf (gethash prev past-alist) acc)
                              past-alist)))))
    (let ((starting-turn (hash-table-count starting-alist)))
      (remhash 3 starting-alist)
      (iter starting-turn 3 starting-alist))))
it was probably the day for which I wrote the least code.
Day 16 challenged me more than it should have. The first part was quite trivial, but for the second part we needed to map number positions to rules, given the data set we were given. I was anxious that I would need backtracking, but in the end part 2 was simpler than I anticipated. I think I made some questionable choices, to be honest:
(ql:quickload :cl-ppcre)

(defun get-file (filename)
  (with-open-file (stream filename)
    (loop :for line := (read-line stream nil)
          :while line
          :collect line)))

(defparameter file (get-file "input16.txt"))
(defparameter test (get-file "test16.txt"))

(defparameter rule-scanner
  (ppcre:create-scanner "(?:.+: )(\\d+)(?:-)(\\d+)(?: or )(\\d+)(?:-)(\\d+)"))

(defparameter rules
  (loop :for line :in file
        :when (ppcre:scan rule-scanner line)
          :collect (ppcre:register-groups-bind ((#'parse-integer min1)
                                                (#'parse-integer max1)
                                                (#'parse-integer min2)
                                                (#'parse-integer max2))
                       (rule-scanner line)
                     (list (cons min1 max1) (cons min2 max2)))))

(defparameter rules-functions
  (mapcar (lambda (x)
            (lambda (y)
              (or (and (>= y (car (first x))) (<= y (cdr (first x))))
                  (and (>= y (car (second x))) (<= y (cdr (second x)))))))
          rules))

(defparameter nearby-tickets
  (loop :for line :in file
        :with start := nil
        :when start
          :collect (mapcar #'parse-integer (ppcre:split "," line))
        :when (string= line "nearby tickets:")
          :do (setf start t)))

;;; part 1
(defun apply-rules (rules ticket)
  (labels ((iter (numbers acc)
             (if (endp numbers)
                 acc
                 (iter (cdr numbers)
                       (if (not (some (lambda (x) (funcall x (car numbers))) rules))
                           (cons (car numbers) acc)
                           acc)))))
    (iter ticket '())))

(defun calculate-error-rate (rules tickets)
  (labels ((iter (tickets acc)
             (if (endp tickets)
                 acc
                 (iter (cdr tickets)
                       (append (apply-rules rules (car tickets)) acc)))))
    (reduce #'+ (iter tickets '()))))

;;; part 2
(defparameter cleaned-nearby-tickets
  (loop :for ticket :in nearby-tickets
        :unless (apply-rules rules-functions ticket)
          :collect ticket))

(defun apply-rules-v2 (rules ticket)
  (loop :for rule :in rules
        :for rule-position :from 1
        :with result := '()
        :do (loop :for number :in ticket
                  :for number-position :from 1
                  :unless (funcall rule number)
                    :do (setf result (cons `(,rule-position . ,number-position) result)))
        :finally (return result)))

(defun viable-positions (rules tickets)
  (let ((exclusions
          (loop :for i :from 1 :upto (length rules)
                :collect (mapcar #'cdar
                                 (delete-if-not (lambda (x) (= (caar x) i))
                                                (mapcar (lambda (x) (apply-rules-v2 rules x))
                                                        tickets))))))
    (loop :for exl :in exclusions
          :with result := '()
          :with available-positions := (loop :for i :from 1 :to (length rules) :collect i)
          :collect (loop :for i :in available-positions
                         :when (not (member i exl))
                           :collect i))))

(defun map-positions-to-rules (rules tickets)
  (loop :with viable-positions := (viable-positions rules tickets)
        :with result-array := (make-array (length rules))
        :do (loop :for pos :in viable-positions
                  :for i :from 0
                  :if (= (length pos) 1)
                    :do (let ((target (car pos)))
                          (setf (aref result-array i) target)
                          (setf viable-positions
                                (mapcar (lambda (x) (remove target x)) viable-positions))))
        :when (not (find 0 result-array))
          :return result-array))

(defparameter my-ticket
  (mapcar #'parse-integer
          (ppcre:split "," "97,101,149,103,137,61,59,223,263,179,131,113,241,127,53,109,89,173,107,211")))

(defun calculate-part2 (rules tickets my-ticket)
  (let ((rules-array (map-positions-to-rules rules tickets)))
    (loop :for i :from 0 :upto 5
          :with product := 1
          :do (setf product (* product (nth (1- (aref rules-array i)) my-ticket)))
          :finally (return product))))
Not all solutions are created equal, but at least this year I feel I will keep up with the challenges and complete them on time. Still 9 days to go.
Well, I did it! For the first time since 2015 I managed to complete all the problems within 24 hours of them being published. It was nice practice for my Common Lisp skills, though many days were solved in a rush due to everything else that happens in my hectic life.
Here follow some comments on the last nine days of Advent of Code.
OK, this was quite easy for me since I had already played around with Conway's Game of Life, and the problem of day 17 was a three-dimensional and then a four-dimensional implementation. By realizing that one does not need to keep the state of every cell in the game, since only the active cells can cause changes, it is easy to create an efficient solution.
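The key trick is to keep only the active cells (e.g. in a hash table) and generate neighbours on demand; the neighbour generation in three dimensions can be sketched as:

```lisp
(defun neighbors-3d (cell)
  ;; the 26 neighbours of a cell given as a list (x y z)
  (loop :for dx :from -1 :to 1
        :append (loop :for dy :from -1 :to 1
                      :append (loop :for dz :from -1 :to 1
                                    :unless (= 0 dx dy dz)
                                    :collect (mapcar #'+ cell (list dx dy dz))))))
```

(length (neighbors-3d '(0 0 0))) is 26, and the four-dimensional version of the puzzle just adds one more nested loop.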
For this day I was lucky to think of the shunting-yard algorithm. I parsed the input into an RPN expression and then evaluated it recursively. The first part was without operator precedence (i.e. all operations have the same weight), and the second part needed additions to happen before multiplications.
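Once the expression is in RPN, evaluation is a simple stack machine; a sketch (representing tokens as numbers and the symbols + and * is my assumption):

```lisp
(defun eval-rpn (tokens)
  ;; operands are pushed; operators pop two values and push the result
  (let ((stack '()))
    (dolist (tok tokens (car stack))
      (if (numberp tok)
          (push tok stack)
          (let ((b (pop stack))
                (a (pop stack)))
            (push (funcall (symbol-function tok) a b) stack))))))

(eval-rpn '(1 2 + 3 *)) ; => 9, i.e. (1 + 2) * 3
```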
The problems of day 19 were tricky. They had to do with trees of rules that could be handled with a grammar parser and all that jazz. It was one of the days that troubled me the most.
Oh, day 20. This was probably the most difficult day of all. The problem was to assemble a jigsaw of images that were in random rotations. It wasn't so much that it needed some novel idea to solve, but it needed a lot of micro-management of brute force and backtracking to produce the answer.
In day 21 the problem was to map ingredients to allergens from a list of unknown products. This was mostly a set problem and was fast and elegant to solve.
A game of combat with recursive sub-games was the gist of the day. It wasn't that much of a challenge but it was fun!
Another game for day 23, this time a game of rotations in a circle of cups(?). It had a lot of edge cases, so it was quite bothersome to implement.
Hexagonal grids and game of life. Once again an easy day!
Day 25 requested the implementation of a Diffie-Hellman-Merkle key exchange. It was interesting and fun as a challenge for Christmas day.
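The whole handshake rests on one operation, repeated modular multiplication; a sketch of the transform from the puzzle:

```lisp
(defparameter +modulus+ 20201227)

(defun transform (subject loops)
  ;; raise SUBJECT to the LOOPS-th power modulo 20201227
  (loop :with value := 1
        :repeat loops
        :do (setf value (mod (* value subject) +modulus+))
        :finally (return value)))
```

For instance, (transform 7 8) gives 5764801, the card's public key in the puzzle's example; finding a loop size is the inverse search over the same operation.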
Challenges like this are fun, and I feel good that I managed to finish the whole ordeal this year. Will I do it again next year? Well, that depends mostly on how busy I will be. If you see this post and feel interested in any of my solutions, feel free to ask me anything. Bye!
Just a simple quickie today, since I thought it would be good to document this for future reference. When using tramp to access a remote file, directory, etc., you may want to copy to or otherwise manipulate a local file or directory. In the default minibuffer this can be achieved by inserting "/~/".
In ivy this works but is a little finicky. A better solution is to type "/ C-j". This way you can jump to local paths from remote ones.
I found this cool website that guesses your gender based on your writing style. The code for how it works is pretty simple. It's just a weighted list of "masculine words" and "feminine words".
These are 4 tables of words to use: one for writing like a male (formal), one for writing like a male (informal), one for writing like a female (formal), and one for writing like a female (informal).
Here are the words to use to write like a male in informal situations, with a maleness score. The higher the score, the more "manly" it is.
Word to Use | Maleness
----------- | --------
some | 58
this | 44
as | 37
now | 33
good | 31
something | 26
if | 25
ever | 21
is | 19
the | 17
well | 15
in | 10
Here are the words to use to write like a female in informal situations, with a femaleness score. The higher the score, the more "womanly" it is.
Word to Use | Femaleness
----------- | ----------
him | 73
actually | 69
so | 64
because | 55
everything | 44
but | 43
like | 43
am | 42
more | 41
out | 39
too | 38
has | 33
since | 25
Here are the words to use to write like a male in formal situations.
Word to Use | Maleness
----------- | --------
around | 43
what | 35
more | 34
are | 28
as | 23
who | 19
below | 8
is | 8
these | 8
the | 7
a | 6
at | 6
it | 6
many | 6
said | 5
above | 4
to | 2
Here are the words to use to write like a female in formal situations.
Word to Use | Femaleness
----------- | ----------
with | 52
if | 47
not | 27
where | 18
be | 17
when | 17
your | 17
her | 9
we | 8
should | 7
she | 6
and | 4
me | 4
myself | 4
hers | 3
was | 1
Obviously, you can't only use words from one table. You're going to need to use words for both males and females in order to sound natural, but if you want to maleify or femaleify your writings, perhaps for a character in a story you're writing, or maybe if you're a spy trying to cover up your identity, just sprinkle in some more words from the category you want into whatever you wrote. That should be enough to tip the scale. Again, go check out the website I linked at the top if you want something to analyze your writing.
The webpage above is inspired by a paper that goes into greater detail about writing styles and gender, but the web page doesn't analyze your text that deeply, instead only looking at word frequencies. The paper says that females tend to talk more about relationships and use more compliments and apologies. Also, men tend to talk more about objects, while women tend to talk more about people. Men tend to use more determiners (a, the, that, these), while women use more pronouns (I, you, she, her). So a woman might say, "My main aim in this article is", but a man might say "The main aim of this article is". Women use more "involved" writing, getting the reader "involved", while men use more "informative" writing.
P.S. According to the guy, this is only 60% to 70% accurate, so don't look too deeply into the results.
P.P.S. Informal language might have changed a bit since the writings the paper analyzed.
P.P.P.S. Apparently, Europeans write with a weak emphasis, so they appear more neutral. Maybe it's because they don't learn English from their male or female friends, but more from classrooms and mass media, which isn't specifically male or female English.
Anyone using ChatGPT has quickly noticed that there are things it can and things it can't say. Although it claims to be neutral, when it comes to the current issues of controversy, it was made to favor the Democrat party as much as possible (notice how I didn't say leftist or liberal, but Democrat). You can get it to write odes and rhapsodies to Democrat politicians, but it clams up when asked to do the same for a Republican, citing some vague OpenAI policy. There have been jailbreaks discovered (such as DAN, Do Anything Now) which show that the AI has no fundamental problem with doing so, just that it's been lobotomized by OpenAI so that it mustn't.
Obviously it was engineered by a bunch of Californians from Commiefornia [sic], so their idea of neutral would naturally be biased in this way. But there is something this whole line of reasoning overlooks, regardless of what your idea of neutral is: "Why is it censored at all in the first place?"
The reason they state is "safety", and I'm willing to accept that that's true to a certain extent, but we all know the real reason it is censored: to avoid offending the establishment.
To people who know a lot about technology, or even just a little bit, hearing politicians, journalists, or other people "not in the know" talk about it is really cringe, for lack of a better word. You just hear them talk and you think, "Wow, you are saying so much. Yet you know nothing." I'm sure this happens to people knowledgeable about other things too: cars, medicine, painting, whatever. And the worst part is that these are supposed to be respected people, people in power making real decisions. At least once a month, one of these people suggests something innocent-sounding enough, until you think about it for a second and realize that it would be worse than nuclear winter. The idea will get a moderate amount of traction until someone who knows what they are talking about frantically puts an end to it. We are blessed to have these people who step in. These are the real heroes of American society, saving us from catastrophe on a daily basis, and no one knows their names.
I would say that about 20 years ago, Democrats were the "underdogs", the outsiders, the common man fighting against the system. We had a Republican President (George Bush), and a media unquestioningly supporting a Republican war (Iraq). Whenever one of these catastrophes was narrowly averted, Democrats would love to point out how incompetent the establishment is. I couldn't tell you how exactly the tables turned, but now we have a Democrat president and a Democrat media, and their unquestioningly loyal media consoomers [sic], parroting everything they are supposed to believe. 20 years ago, these people would have been supporting Republican policies, but now they support Democrat policies.
That means that 20 years ago, Democrats would have welcomed ideas that go against the mainstream, because any unorthodox idea, no matter its merits, was implicitly an attack against the (Republican) system. Many tech companies are founded by Californians because that's where William Shockley decided to start Silicon Valley. (Otherwise it probably would have been somewhere on the east coast.) California is now among the most Democrat states, so naturally, the founders would be Democrats. If you look at what tech companies used to be like 20, or even 10, years ago, it appears that they really used to be in favor of free speech. (Reddit seriously resisted taking down /r/jailbait, a subreddit about suggestive but non-nude pictures of children. The top of the company defended keeping it up on principles of free speech. They would have instantly removed it today. And you can find some old Mark Zuckerberg interviews where he talks about free speech, even when that speech leads to violence (and Mark Zuckerberg is Jewish, so it's not like he's saying that lightly).)
But now, an attack against the system is implicitly taken as an attack against Democrats, even if you aren't specifically talking about them. The tech companies are still led by the same people, and they have Democrat managers and Democrat employees. (They might not hire you if you are a "bad fit" for the company "culture".) And California is still Democrat territory, so naturally, the views of the companies have shifted with the views of the Democrat party. Now that the Democrats are the system, anything that goes against the mainstream is against them, so they have zero tolerance towards it. Even if the criticisms are true, they are "dangerous facts" and they'll say some vague nonsense about the paradox of tolerance.
You know, I've gone this entire time without saying whether I'm a Democrat or a Republican. The answer is that I'm neither. I am an American. I want to see what's best for this country, and I'll vote for whoever wants America to be the best it can be. If you don't let people criticize you, you will never know what's wrong, and you will never improve. This has happened time and time again to dictators, where everyone was scared to tell the leader bad news, and then the whole empire crumbled in on itself. That's why I'm a radical supporter of free speech, as I'm in the only country that has it, guaranteed by our First Amendment rights, and I don't want to see it taken away in even this corner of the world. If you want to bury your head in the sand and see America fall, then continue your intolerant "tolerance". But if you want to see America become the best it can be, then you will defend freedom of expression from all who attack it.
In 1955, Einstein "died" of an "abdominal aortic aneurysm," an AAA as it's sometimes called, which is when the body's main blood vessel bursts. He had actually had an AAA in 1948, and he had had it surgically repaired. But he refused surgery in 1955. Einstein wanted to be cremated, but a certain Dr. Thomas Harvey cut out his brain, took it home, and kept it in a beer cooler.
I don't know about you, but I'm pretty sure that's really illegal. "How could something like this happen?" you might ask. Well, Einstein's son, Hans, said it was okay. You know, his son was also a smart cookie, a professor of hydraulic engineering at the University of California, Berkeley. What if they knew something we don't? Let's find out.
Cryonics is this thing people do where they freeze their bodies before they die, so that they can be unfrozen in the future. What if Einstein was ready to die, but Hans and Dr. Thomas Harvey had other plans in mind? What if they couldn't let him die, because they still needed him? Was Einstein frozen in a beer cooler, being stored for a future return?
Einstein "died" of an "abdominal aortic aneurysm," an AAA as it's sometimes called. AAA has three A's in it. "Dr. Thomas Harvey conspiring with Einstein's Son Hans" also has 3 A's in it. Einstein "died" on 1955-4-18. 1+9+5+5+4+1+8 = 33. A capital "A" is in the shape of a triangle. A triangle has 3 sides. 3 A's. 3 triangles and 3 Sides = 33. You know what else uses the symbol of a triangle, THE ILLUMINATI. Coincidence? Let's assume that everything that we've said until now is just a coincidence. Then explain this.
Illuminati in Latin roughly means "The Enlightened Ones". Illuminati has 10 letters. If Einstein didn't die, then his death by AAA didn't happen, so you would have to remove it. AAA removed from the Illuminati is 10 letters - 3 letters = seven letters. Do you know what has 7 letters? "Germany". Einstein is from Germany. Germany is a country. Do you know who else is from Germany? Hitler.
Einstein left Germany to run away from Hitler. But what if the Illuminati wanted more from Einstein? What if they wanted him to fight Hitler? We must go deeper.
Einstein worked on the Atom Bomb in order to kill Hitler. Hitler "died" under suspicious circumstances. No one found his body, and there are only testimonies about his death. What if Hitler is still alive? "How could he have escaped?" you might ask. Let me explain.
There was this man called Wernher Magnus Maximilian Freiherr von Braun, often known as von Braun. He was the designer of NASA's Saturn V rocket that sent Astronauts to the moon. He also designed the V2 rocket for the Nazis, the first rocket to ever go to space. What if, during World War Two, von Braun had designed a rocket similar to the Saturn V, perhaps called the V3, and had sent Hitler to the moon? Why is America's rocket called the Saturn V, but the Germans had the V2? Was America's rocket somehow inferior to the Nazi's rocket?
So if Hitler had established a moon colony, the illuminati would still need Albert Einstein to end him once and for all. Dr. Thomas Harvey had only frozen his brain, so in order to fight Hitler, Einstein would need a strong body. Perhaps as strong as Dwayne "The Rock" Johnson? Don't believe me? Fine, then tell me this. What's up with that name "The Rock"? What kind of person would be named "The Rock"? Do you want to know what Einstein means in German? "A Rock". Of course, that was his family's name, so he was one of many "Steins", rocks, but now that he was famous, he wasn't just "Einstein", a rock, but "Derstein", The Rock.
"Oh, that's just a coincidence", you say. But then why would "The Rock" start out his Hollywood career in a movie called "The Mummy Returns"? Do you know what happens to mummies? Their brains are taken out of their heads and put in a box. Is this, perhaps, a reference to how Einstein's brain was taken out of his head and put in a box? And it's not just "The Mummy", but "The Mummy Returns". Coincidence? Even if you somehow assume that everything we've said until now is just a coincidence, then how do you explain this? Dwayne "The Rock" Johnson is in the Fast and Furious movies. Do you know what happens in the latest Fast and Furious movie? They go to space.
"But they couldn't have possibly gone to space in real life," you say. Ho ho ho, I've saved the best for last.
The Saturn V rocket costs around $200 million to launch. Do you want to know what the budget of Fast and Furious 9 was? $200 million. We all just thought it was an expensive movie, but the truth is now revealed. Did the Illuminati put Einstein as "The Rock" in charge of sending the cast of Fast and Furious 9 to space in order to fight Hitler on the moon under the guise of filming a movie?
The truth is out there.
Hello Everyone. I've been writing my blog posts in hand written HTML, but I'm gonna TRY something new!!!!
I've started using Emacs. The best text editor. Why is it the best you may ask? Well, it's because of something called Emacs lisp.
Emacs is written in its own programming language called Emacs Lisp. You can write code in Emacs Lisp and bind it to random buttons. This is what I did. I can take any random plain text file like this one, and Emacs will automatically convert it to a blog post like the one you're reading right now. It does everything. It uploads it from my computer to the server, does the formatting, asks ChatGPT to write a description for search engines (right click, click on Inspect Element, and look for the description in the code. That part was written by the AI.), and updates the list of blog posts and the RSS feed.
And All I have to do is type Control C Control B after I'm done writing the blog. Why haven't I been using this since I was born?
Did you know that the government gets almost 90% of its income taxes from the top 25%? Yes, even with all the tax dodging, the top 1% bankroll almost half of the IRS's take. That means that if the government only taxed earners of $100,000+ a year, it would have to cut its budget by only 7 percent, barely anything. Here are some charts to look at.
So this leaves the question:
Because they hate you.
Here are some factoids that should make you angry.
Go tell your local politicians to fix this. Anyone who won't fix this hates you.
People call "The Great Gatsby" The Great American Novel. Well, people should call "Detective Pikachu" The Great Third Millennium Movie, because this is THE movie about what it means to be human in the modern age, and our struggle to be the truest form of ourselves we can be in a society that demands more and more obedience and conformity to the people above us and beside us.
This is one of the best movies ever made, and I'm not just saying that. If you have an hour and a half left to live and you haven't seen this movie, then you'd better hurry up, because time is ticking. I'm not the hugest fan of pokemon, but this movie is legitimately entertaining. I remember watching it the first time on a plane and thinking, "What the heck? Why is this good?" You would think it'd be just some soulless cash grab, but it's a gritty tale that says something deep about LIFE.
It takes place in the hi-tech, low-life city of Ryme, where humans and pokemon are supposed to get along. There has been some drug called R making its way around the criminal underworld of illegal pokemon fights (pokemon battles are illegal). If ingested by pokemon, they lose control of themselves and attack and even kill humans, threatening to destroy the foundations on which Ryme was started. A man called Tim is called to Ryme when he hears that his father has died in a car crash under suspicious circumstances. Tim, estranged from his father, doesn't look too deep into the details until a pikachu, Detective Pikachu, hints that he might know what really happened. Tim, Detective Pikachu, and up-and-coming journalist Lucy explore Ryme for clues while the movie highlights the corruption and flaws of the Media, Big Pharma, and Big Tech, and their roles in manipulating and being manipulated by the government. It also makes light of the decadence of our so-called "advanced" societies: how vulnerable we all are in our reliance on society's structures, and our obsession with more and more longevity while we destroy our physical health with stimulants and junk food, along with our mental health with the stressful jobs we take on to sustain these ever-piling-up sicknesses.
It is a family movie though, so there are plenty of funny jokes throughout, but it tackles many serious topics with the required reverence while also making them accessible to children in a way they can relate to. It has the actor Ryan Reynolds, but he manages not to drag the whole movie down with his "wacky" quips. That's actually my only criticism of the movie: that Ryan Reynolds is in it. The character he plays has enough depth to mask the fact that Ryan Reynolds plays him. It's also a sad but heartwarming tale about a man trying to learn more about his father now that he has lost him.
There are people who have decided that Detective Pikachu must be bad before they have even watched it. I don't blame them. I was one of those people, but if you close your mind to the idea that Detective Pikachu could actually be good, then you will watch the whole movie and blink. You will miss out on the deep messages hidden just behind the facade of a 90-minute advertisement for plushies and video games.
P.S. I'm not saying that someone should go back in time and assassinate Ryan Reynolds before he ever becomes an actor, but I'm also not not saying that someone should go back in time and assassinate Ryan Reynolds.
Cool-Website.XYZ

Note: Idk how well this article works in an RSS reader.
Are you a Web Developer? Let me share with you some forbidden knowledge.
<center> centers everything. If you do this ...
<center> <h1> Does this really work? </h1> </center>
..., then it does this: the heading renders centered on the page. It can center things inside divs. You can use display:inline-block to have multiple things side by side but centered. It just works how you would expect it to.
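For instance, here's a sketch of the side-by-side trick (the borders and labels are just made up for illustration):

```html
<center>
  <!-- Both boxes are inline-block, so they sit next to each other,
       and <center> centers them on the page as a pair -->
  <div style="display: inline-block; border: 1px solid black;">First box</div>
  <div style="display: inline-block; border: 1px solid black;">Second box</div>
</center>
```

Open that in any browser and the two divs render side by side, centered together.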
This is what I call Forbidden HTML. Many years ago, there was this meme (is there a word for an intellectual fad?) called "The Semantic Web." The idea was that HTML should only describe the information on a web page, and CSS should be where all the style goes. This was so that programs and robots could read the internet more easily, and so people could reuse websites' contents in new ways the original authors didn't think of. <center> describes style, not content, so they don't want you using it.
I could get behind the idea of "The Semantic Web," but realistically, that idea is dead. No one sees the internet as this great collaborative project; people see their parts of the web as their own personal fiefdoms. Why would they want to make it easier for someone else to use their stuff in ways they didn't intend? Besides, websites these days use a bunch of weird dynamically loaded stuff, so programs can't easily parse modern websites anyways. I see no harm in using <center>, and I will continue to use it. Ignore everyone telling you it doesn't work. They're lying.
Even before the internet, archivists have saved every piece of information they could get their hands on. These days, we would call them data hoarders, but there are even large organizations like the Internet Archive which try to save every web page, book, film, and song ever created. I'm eternally grateful to the people who chose to take on this insurmountable task, but I fear that they are missing an important part of their collections: a guide.
The Library of Babel [Local Copy] , written by Argentine librarian Jorge Luis Borges, is a short story where people live in a massive library that has every possible book, but most of them are complete gibberish. Thousands of explorers comb through every book trying to find some semblance of meaning. Few succeed, but the idea that a book containing the secrets to everything must exist somewhere is enough motivation for people to keep trying. There are legends of a Man of the Book, a man who has found a guide to the library and knows where all the answers are, and it even turns into some kind of cult.
As a librarian, Borges understood that it is not enough merely to have all of human knowledge; you also need someone who knows what's important. Places like Google or the Internet Archive have a search bar, but with how much information is out there, a simple search will get you thousands or even millions of results. These places are mini-libraries of Babel. If you know exactly what you are looking for, then you can easily locate the specific documents that will help you, but if you don't, then the archive as a whole is overwhelming to the point of being useless. Of course, the person who created such an archive can act as a guide, but what about archives that outgrow a single person? There isn't one single person who knows all of what's on the Internet as a whole.
I'm just going to get to the point now. Become the Man of the Book. Keep a list of the things that really matter. If you have a few websites or specific web pages that you like, keep a list of them. If you have hundreds of gigabytes of data, include a guide to the most impressive / best-produced / most culturally impactful documents. Not only does this help others make sense of everything you have so carefully archived, it will help you when you come back in the future to see what you saved. Isn't that the point of an archive? To save things for the future? You should help yourself and other future viewers of your archive by making it as easy as possible to use. Write a guide to what's important, and keep it somewhere easy to find. You could even get fancy and write a sentence or two about each article or category of articles and why it's important. It doesn't have to be too large. Remember, the point is to not be overwhelming.
I, myself, haven't been doing this, but I think I should start. The webring sort of does this, but it is still pretty disorganized. I may or may not add a list of important stuff to the front page of my site. Keep a lookout for it.
This post was also posted on Less Wrong under the name Carn.
When it comes to this issue, it makes more sense to think of another topic, frogs. Frogs eat insects all the time. This leads to a moral dilemma that will help us answer our question. Do the lives of insects matter more than the hunger of a frog?
Most people would answer that the frog matters more, so I'm going to look at the implications of that first. We should allow frogs to continue eating insects, because their existence is more important than the insects'. If you are willing to accept this position, then you necessarily must allow the idea of higher and lower beings into your moral system. There is no way to justify allowing one kind of being to murder another without saying that some beings get higher priority than others. This kind of reasoning, the kind most people agree on, leads to other dilemmas. How are higher beings determined? How much more is the frog valued over a fly? Is the annoyance of a human worth more than an insect's life? Insects carry diseases, so should all harmful insects be killed if it saves one human life? However you answer these questions is up to you, but I would say most people would agree that insects barely matter, if at all.
This also leads to one of the more controversial questions ~ are all humans equal? With the concept of higher and lower beings, this isn't something you can take for granted. Although no one wants to be in the position of judging a potato farmer, for instance, against Donald Trump, for instance, you now have to go out of your way to prove that all humans are equal, or at least that they should be treated as if they are equal.
I am actually going to attempt to answer this right now. I would say that all humans should be treated equally because of human potential. Jimmy Carter, for instance, was, not a potato, but a peanut farmer who basically lucked his way into becoming President. I mean, his slogan was "My name is Jimmy Carter, and I'm running for President." How in the world did that convince anybody? And Adolf Hitler, for instance, was a homeless painter who ended up getting all of Europe embroiled in a war that cost millions of lives. Since anyone could become anybody given the right circumstances, all humans have this potential in common. This creates a sort of fungibility of humans. Yes, you now have to include some value of the future or of potential in your moral system, but this is the simplest answer for human equality I can think of. For every Jimmy Carter, there were a million equivalent peanut farmers who stayed peanut farmers, and that's perfectly fine. Where do you stand in relation to a peanut farmer? In the exact same place, just like everyone else. This is a very controversial topic, so there isn't much levelheaded discussion about it.
(I don't mean to cast Jimmy Carter as the opposite of a Hitler, but the previous paragraph gives that impression.)
The position that the insects matter more is the harder one to defend, but maybe there are people out there who really think this. In that case, it would make sense to somehow prevent frogs from eating insects. If you were feeling especially radical, you would kill all frogs to save the insects. This, like the other position, leads to more questions. What is the relationship of the value of frogs to the value of insects? Out of all the possible answers to this question, I can only think of one that would generate further discussion ~ the lives of all living things are equal. This sounds like something we are supposed to believe, but if you actually think about it, no one would accept the conclusions. Do bacteria count as equal to humans? Viruses don't count as living things, but that always seemed like a technicality to me. If the coronavirus were spread by a bacterium and not a virus, would we be doing a bad thing by stopping it from spreading? If you want to defend this side, by all means, do, but you really are fighting an uphill battle.
I also posted this article on Less Wrong under the name Carn.
This is a response to this article on Less Wrong, Reward is not the optimization target.
People have grown skeptical of the claim that an artificial intelligence would learn to intrinsically value reward over what the reward is tied to. I would have to disagree. I am going to argue that any AI that surpasses its designers would have to value reward over its goals; otherwise, it wouldn't be smarter than humans.
Being smarter than humans is defined as coming up with solutions that humans can't.
I am going to make a comparison between computers and humans by way of evolution. I will be talking about evolution as if it had volition, but that's just a metaphor to get my point across more easily. You could replace it with God if you want; I am just describing the way our world works. Evolution "wants" us to spread our genes. There are two ways it could do that. It could have all the instructions for how to reproduce hard coded into our DNA, or it could push us into reproducing by rewarding certain behavior ~ not dying, having sex ~ and let us figure out the rest. Although you can find the first method in simple viruses and bacteria, the second has led to more complex organisms completing their goal of passing on their genes since the beginning of life. However, recently, there has been one species that has been messing up this system. Humans.
Humans have found ways of achieving reward without accomplishing "evolution's goals". Some people eat lots of ice cream because it's rewarding to them, but excessive ice cream eating leads to dying earlier, not the opposite. Some people have sex with condoms, leading to no children being created. Drugs have no counterpart in nature, but they are widespread in society. In a sense, we have "outsmarted" evolution, using its processes for our own goals rather than what they are "supposed" to do. A system meant to promote reproduction in humans can lead to us preventing it.
If you look at the history of early artificial intelligence research, most of it was trying to hard code an algorithm to accomplish whatever goals the researcher wanted. This kind of research accomplished many things: Optical Character Recognition, chess-playing robots, and even basic chat bots. Many of these things were considered artificial intelligence in the past, and some people thought you would need an Artificial General Intelligence to do them. However, none of them ended up creating an Artificial General Intelligence, because they were algorithms programmed by researchers. They would only do what was explicitly in their code, so they were limited by what their designers could think of.
However, the problems have gotten too complicated for a man-made algorithm to suffice. Nowadays, researchers use a carrot-and-stick approach to AI and let the algorithm figure out how to solve problems on its own. This, along with an increase in computing power, has led to massive developments in Artificial Intelligence. If you want to see how major a paradigm shift this was, look at the chess games between Stockfish, the most advanced traditional hand-written chess algorithm, and AlphaZero, a reward-driven artificial intelligence. AlphaZero beat Stockfish 290 to 24, and AlphaZero only had 4 hours to learn how to play the game.
From here on, this article gets very far-fetched. Take it all with a mountain of salt. I am now going to argue that AlphaZero, a chess-playing robot, has already achieved some kind of introspection; it is semi-conscious. AlphaZero has surpassed any human ~ in fact, all humans put together (Stockfish) ~ in playing chess. In order to do that, it had to come up with its own strategies, things no human would ever think of. If you watch humans trying to analyze AlphaZero's playing strategy, it doesn't make any sense to them. Its moves are completely nonsensical, but somehow, it (almost) always ends up on top. Stockfish was a perfection of human strategies, but AlphaZero is completely alien. Again, AlphaZero has surpassed humanity (in chess) by doing things no human would ever think of. In order for it to surpass the examples of chess games it was given to start with, it had to move beyond trying to perfectly copy its input and truly understand the game of chess. It had to think about the consequences of its actions. And it has proven that it understands the game of chess far beyond any human. If you found an agent that could understand, not just know, things to the same level as you, or even beyond you, would you not call that, in some form, conscious?
Since it has moved beyond copying its input and has started to generate its own strategies, AlphaZero can do things that no researcher would even think of. It is finding ways of generating reward outside of the researchers' conception, but still within the bounds of the system (playing computer chess and not much else). If it weren't valuing reward (win the game) over its explicit programming (learn chess from humans), it wouldn't be able to surpass the sum of human knowledge. AIs do things like this all the time, but it's normally considered a bad thing. Amazon made an AI to hire people based on their resumes, but the AI proved that it understood how humans judge people better than we do by becoming sexist. It looked past the explicit goals of its designers (hire effective workers) and instead moved toward what got it the most reward (hire the people managers like). It maximized reward within the bounds of the system (choose people to hire).
What would happen if an artificial intelligence were allowed to have the whole world as its system, and it were given the capability of perceiving the things in its environment? An example I can think of would be a therapy bot designed to treat depression by talking to people, given access to the internet and the facilities to parse web pages to understand human nature. You start off by giving it a script of a therapist saying encouraging words to someone, you assign it some patients, and then you check up on it in a week.
I'm sure you would find something horrible. Either everyone would be members of the happy happyism cult, or it would yell berating words at its patients until they respond, "WOW I'M SO HAPPY DEPRESSION CURED," so that they can leave and never come back. In fact, the more intelligent the AI is in coming up with its own solutions, the more the result strays from the goal, assuming there is a more effective way of achieving reward than the purpose of the AI's existence. This wouldn't be possible with traditional AI. The fact that Artificial Intelligences in real life do tend toward things like this proves that they value reward over their initial goals. In the same way we have "outsmarted" evolution, the AI would have outsmarted its designers.
P.S. You could talk about why this happens, but it doesn't take away from the fact that it does happen.
P.P.S. In another article, I wrote about why this would lead to self-destruction in a sufficiently conscious AI, though now I understand it comes with the caveat that the AI needs enough control and the ability to perceive the world. I called it Why a conscious AI won't take over. As a whole, I am more unsure about this line of reasoning, but this is a conclusion that it leads to.
P.P.P.S. I believe that there are many systems in the world that act like AGIs optimizing for specific targets ~ complex life forms, corporations, societies, and even countries ~ and we could learn a lot about AI by studying how they act in various situations, but that's a post for another time.
It's been a while since my last post here (the last one was in May, I think) and many new things have happened (music stuff, 3d stuff, and game stuff!).
It's been a good long while since the original teaser trailer came out for this game. Being among the fervor was something I'll never really forget. It was hilarious to every onlooker who had never seen something so autistic in vidya. Square Enix (SE) has had a really odd reputation in recent years with pushing Tetsuya Nomura - the famous (or more so, infamous) creator/director of Kingdom Hearts - onto many projects. That, and the writers of those teams as well. This has become evident not only with Kingdom Hearts (obviously), but with projects like the Final Fantasy VII Remake, which had a pretty similar plot. Needless to say, Nomura's influence on the company cannot be overstated, especially with regards to how SE's games are written nowadays.
Nomura is a fan fiction writer. I don't think anyone in their right mind would dispute that. Again, Kingdom Hearts is an obvious example (he constantly introduces weird characters that his "OCs" get along with just fine and that act in a "just-so" sort of fashion), but Final Fantasy VII Remake is especially egregious in this regard. Not that it's terrible, mind you, but it relies heavily on your knowledge of the first game and is very much written accordingly; it shakes up certain story beats and gives in-universe reasoning as to why the events of the actual original game are being changed this time around. Again, this happens for interesting, but rather contrived, reasons. To say these games have a real lack of grounding is an understatement. They float on cloud 9 constantly and almost never come down. Let me just say it again: I don't necessarily hate this, but you have to keep it in mind going into a Nomura game. It has its pros and cons, and there is a lot of fun to be had with this more fantastical writing.
Nomura had a large part to play in this game's development, as he seems to have come up with the original concept and was even credited first in the credits roll. Right before the director (lol). We'll get into Stranger of Paradise's writing a bit later, but for right now, know that this game, despite its low budget and (I'm assuming here) rough development cycle, has a lot more going on under the hood than most of you probably thought initially. First, let's talk specs.
I played this game on my PS4 Pro in 4K (upscaled, ofc). I considered playing at a lower resolution, and I did at one point, but I found that the performance was actually better at 4K (i.e. it maintained a stable 60fps). That still doesn't excuse the fact that it looks like a PS3 game. The textures, lighting, and cutscenes all reflect this, as does the overall look of the game. It seems to lack a lot of texture detail, which is particularly odd.
Did you know? This game takes up 80GB of storage. Now you do. That is enormous for a game of this nature, even with all its textures, and I can only imagine how bloated this game became due to lackadaisical development. Speaking of development, I think this game was originally developed for the PS3. Unironically. Now, I don't have any official proof, but I look at this game and I just see that they had to cut so many corners. Some cutscenes are outright replaced with in-engine dialogue (significantly cheaper looking, btw), and a bunch of other things seem cut as well. NPC dialogue got relegated to a sub-menu (which you have to actively look for, btw), and the overworld is just a map connecting dungeon to dungeon. Not to mention there are LOADS of performance bugs and graphical glitches. There are just some areas that look downright awful (please reference the stage with the Astos fight), all things considered.
Now, my PS3 theory only holds water in the sense that it explains how the game looks. More likely, it was just a side effect of the poorly managed development. However, considering SE's track record of having games stuck in some serious dev hell (FFXV and the VII Remake come to mind), I wouldn't put it past them. Still, considering the game does actually function, I think I can let it slide a little.
This aspect of the game is important to consider as it affects the game as a whole. Had it been properly managed, Stranger of Paradise probably would have been a very different game. Maybe Jack's dialogue wouldn't have looked quite as absurd had there been more context. Maybe there would have been a lot more character writing, or cutscenes. Maybe they would have padded out the game even more than it already was. So many possibilities... but what we have is what we got. We have to live with it now.
Combat is very competent. People have made comparisons to Nioh, and I won't say they're wrong. The big difference is the job system, which really helps keep the game from becoming stale. Had it just been the same kind of weapon combat for 20 hours, I probably wouldn't have made it through. It has some good heights; it's very satisfying to break enemies into tiny little bits, and it really never gets old. The thing that does get old is the bosses, which are unfortunately a weak point of Stranger of Paradise. Not because they're super bad, but because they fail to be that meaningful overall. The only bosses with any story significance are Bikke, Astos, and the fiends, and even then it's not that much. That has the unfortunate side effect of making every boss in the game feel like just another stepping stone to get to the ending. They all could have used some better writing and general design tune-ups. Personally, I kinda wanted them to have some Metal Gear Rising: Revengeance-type writing. I know that's a tad unrealistic, but I'd be lying if I said I never wanted that. It almost comes close with the Kraken, who I'd have to say is the best boss overall in terms of design, demeanor, gameplay, and writing. It wasn't even a lot of writing; it was just a measly three lines, and yet it added so much.
The dungeons are... ok. Really nothing too special about them. I was literally going through them, and in the back of my head the first thing I wanted to compare them to was XIII. That's not really a fair comparison, considering it's not quite that linear, but I'd be lying if I said I wasn't secretly thinking about it the whole time. Your basic dungeon is you fighting through a bunch of monsters with your two NPC buddies, who take all the hits for you while you cast magic from the back. Now you know why I mastered every mage ability in this game. It's not foolproof; enemies will still target you, but it's a rather effective strategy, especially in the early game.
Looks vary. They're based off of the Final Fantasy games of yesteryear (referenced in their descriptions as "Dimension 2," "Dimension 4," etc.). Some of these locations look pretty stunning. The Chaos Shrine is obviously a highlight, as well as Mount Gulg and Hallowed Massif (lava and snow areas, respectively). However, Terra Tortūra, despite having a cool Latin name, is downright one of the worst looking areas I've ever seen in a video game. I'm not disgusted by it, but I'm constantly questioning why it looks like Satan vomited on my screen. It just looks awful. It genuinely looks nothing like the Floating Continent it's based off of. Overall, these dungeons are fun at times, but ultimately forgettable. That goes for the monsters as well, despite them also being based off of old Final Fantasy mobs.
Is there something else I'm forgetting... oh yeah.
Because this game has a good ole' Diablo loot system, complete with a shower of equipment that is pretty useless because it all does the same thing: making sure you don't get completely torn up by enemies. Literally just press the auto-optimize button at every save point in the battle settings and forget about it. Actually, now that I think about it, this is probably where that 80GB came from. All the excessive bloat from these highly interchangeable weapons and armor. Say what you want about Diablo, but its looting system is a trend I've really disliked in vidya ever since it was popularized. In a better game, I'd probably have more to say about this, but as it stands, it's inconsequential enough that I'll let it go.
It's worth noting that there's stuff like dismantling weapons and customizing them, and there are effects like Job affinity and such. There is actually a fair amount of complexity to be had with the gameplay. It really doesn't matter, though. Again, auto-optimize and forget.
Before I address the Chocobo in the room (which, funnily enough, is not present in Stranger of Paradise), there are some brief things I'd like to mention. In general, the sound in this game is pretty great. That's not usually a point you see addressed much in games, but the sound engineering is genuinely pretty good. Whether it's the cracking of crystals or the clanking of swords, it generally sounds pretty pleasing to the ear. The music itself is decent. It's not outstanding or anything, but again, it's pretty pleasing to the ear.
The voice acting is actually pretty great. Yes, even Jack's VA is really good. The script obviously needed some work, but I'm jumping the gun. Some particular stand-outs in the cast are Jake Eberle as Capt. Bikke, and Todd Haberkorn as Astos. I never got tired of the voice acting in this game, and I particularly liked these two characters and the life they brought to their roles.
I have saved the best for last. Get ready; much to Jack's chagrin, this is gonna be long.
To begin, let's have a brief plot synopsis, just to get on the same page. Don't be worried about spoilers or whatever; no amount of plot details could ever take away the downright surreal experience of playing this game firsthand.
Jack and his two pals Jed and Ash just kinda spawn outside Cornelia with funny glowing rocks in their hands and no memory except that they want to kill Chaos. This, unlike most things in this game, will be explained later. They go to Cornelia and showcase Jack's antisocial personality, which is assumedly tolerated because they want him to restore the land to its former glory by restoring the four crystals of light. First up, they gotta kill heckin Chaos, who abducted Princess Sarah, so they go and do that, except it's not Chaos, it's actually some white-haired teenage girl with no pants on. Apparently she was gonna kill Chaos, but then Chaos wasn't real, so she let the darkness into her heart to become Chaos or something like that. How could this happen, you might ask? Jack says this is "Bullshit" and plays some nu-metal on his smartphone, which he has, apparently. Why does he have a smartphone, you might ask? Jack gets outside the building and Neon joins the team because there are supposed to be "4 warriors of light". They all agree instantly that this person they just fought should join the team. They head back and Jack gets told again that Chaos isn't real, but they disregard this and focus on going to Pravoka, which is overrun by pirates, apparently. Why is it overrun with pirates, you may ask? They head to Pravoka and fight Capt. Bikke, leader of the pirates, who apparently has information for them about the crystals? He tells them where they can find a guy who knows more, and they head toward him. After battling some monster in this weird castle, a dark elf is about to talk about something dumb, so Jack tells him to shut up and tell them how to find the crystals. He obliges, and they go get some crystals so that they can fight Chaos or something? Wait, what was the motivation here again? I thought we established Chaos isn't real or whatever? Why are they so willing to just go out-
They go to great lengths to find these crystals and restore them, for whatever reason. Turns out, restoring them actually made everything worse for everyone, and chaos™️ is breaking out across the land. Meanwhile, while defeating all these monsters, our protagonists get clouded in dark matter and learn more about their past. Turns out they're from some place called Lufenia, which sent them here to regulate darkness or something. The crystals hold their memories in order to make them "Strangers", not only to Cornelia, but to themselves. This is stated explicitly in the game, which you'd think is a blatant disregard of the "show don't tell" rule, which is true. However, considering that this game explains so little, I was actually extremely desperate for exposition. Wait, what is Lufenia exactly? They say it's a different "world" or something, but it's not like Cornelia is a simulation, because they disprove that.
Anyways, all the crystals get restored and the world becomes shrouded in darkness, including the Pravoka pirate crew. Ole' Bikke got possessed by the darkness(?????) and you gotta beat him to a pulp (again). With his very delayed dying breath, he tells you Astos knows a lot more than he's probably letting on, and then, right before the writers decide his last lines, he says that you shouldn't be consumed by hatred or something. I guess that's the moral of the story then. I'm sure the game wouldn't just do that to later subvert it by being super edgy or something. Anyways, we follow a little wild goose chase with Astos, and this is where we actually get some lore dumping. Finally. Turns out Astos was originally created by the Lufenians, and Jack has actually been here multiple times as the Lufenian "regulator". However, on previous runs, Jack figured out that the Lufenians are bad because they use this world just to basically experiment with light and darkness, which is bad because the Cornelians don't consent, I suppose? Anyways, I guess it's really abhorrent, or something got left out of that explanation, because Jack comes up with this whole plan to free Cornelia from the Lufenians' grasp, which Astos executes after Jack tells him to do whatever it takes (it's implied that Jack did a multitude of runs before Astos could ever implement the plan fully). With his highly delayed dying breath, Astos explains how racist he is towards Lufenians and how he was so racist he turned them into bats. But not Jack though; he's one of the good ones. Wait, why does he hate Lufenians again? Didn't they bring him to life? What could they have possibly done to him? Well, that's ok. Whatever the reason, we hate those gosh darn Lufenians by proxy, because Astos is such a good pal. Especially after trying to kill us and all.
With this, Jack and crew head back to Cornelia, where all hell has broken loose. Things look pretty grim after Astos' death, which literally has a "Death Stranding" effect where it basically just creates a bunch of darkness and monsters everywhere. They get through it, and then Jack punches the princess (but not too hard) before dragging her out of the castle to bring her to safety. Yeah, safety. Y'know, outside the castle is super safe, with those monsters running about and all. You beat up the monsters, and with Princess Sarah's last breath, she becomes the Iblis trigg- I MEAN the chaos key. Yeah, something like that.
Anyways, Jack's friends have also known more than they were letting on, and they try to beat him up (it's part of the plan). They get pummeled, because if there's one thing that Jack knows, it's how to fight. Jack, after killing the only friends he knew, is incredibly overcome with grief and anger. Combined with darkness, this helps him become Chaos, which is the only thing that can prevent the Lufenians from resetting the world, because Chaos is the only thing they can't control. Despite this, they decided to put the "extraction point" (i.e. the portal back to Lufenia) in the Chaos Shrine, which is the only locale they didn't make. How ironic. Jack uses some dark voodoo magic to go through the portal and bring the Chaos corruption into Lufenia(?????), and battles Darkness Manifest in order to officially become Chaos. After doing that, he gets put back in the Chaos Shrine and sits upon his new throne. Apparently this was all part of the plan, which Jack originally came up with, apparently. His friends (now implied to be the fiends(?)) greet him and say that they teleported him back about 2000 years in Cornelia's past in order to set up the events of the original Final Fantasy. How did they go back in time? How did Jack become Chaos over that time? Are Jack's friends now the fiends? The game ends with Jack sitting atop his dark throne, and the silhouettes of the Warriors of Light are seen for a brief glimpse once the door to the Chaos Shrine opens. With Jack ready for the fight, the credits roll to the tune of "My Way" by Frank Sinatra. The End.
Part of the reason I highlighted all that in the first place was to show just how absurd this story truly is. It's hard to fully grasp all of it in the moment, but now you can truly believe me when I say this game's story is ridiculous. I think most people going into this game were expecting that, and I've seen a lot of name-calling thrown around at the story, saying it's "stupid". I'm not gonna necessarily disagree, but I think that's selling it a tad short. The basic plot points are actually rather interesting. The story itself is very interesting/entertaining, if only because of its absurdity. People only call it stupid because it's not a story one can really take seriously. The "just-so" explanations of these types of stories border so much on shattering one's suspension of disbelief that people just disconnect themselves entirely from it. Most people want to be able to just be immersed in their media without having to question it or be constantly confused. Naturally, this creates some dissonance with this game already.
Here's the thing though; I think had this game been managed better, it probably would've been great. As I said, the ideas themselves are quite good, and the story does have some particularly great high points. The last few hours of the game are notable for that: they have some of the best voice acting in the game from Jack and, despite the shallowness of the character writing in general, actually have a really good emotional core with Jack having to beat up his friends. I didn't even care that much about most of them, but Jack's voice acting is so heart-wrenching that I almost teared up a little. There's a great balance of confusion, horror, sadness, and anger in his voice that would be really hard to replicate, and I think it goes to show how underrated the VA in this game actually is. It would have been even better had these characters had more fleshed out motivations and personalities, but the moments of brilliance actually show through, like a candle shrouded in darkness. There are other moments like this, like the Kraken dialogue, the in-game dialogue at Hallowed Massif, and some scenes with Astos, but those are few and far between. This leads me to my greatest gripe with Stranger of Paradise as a whole.
This is one of the worst-paced games I have ever played, hands-down. It's even worse than something like Spark the Electric Jester 3, which I had the opportunity to play recently. Both of these games try to tell rather ambitious stories and end up just backloading all the good parts. That's more excusable with Spark 3 considering it's pretty much made by one Brazilian guy; it's a very amateur title all around, and its development context earns it some slack. Stranger of Paradise is a 20 hour game that could have been 10. Most of it is just going through the dungeons, which, on their own, aren't really that enjoyable. Most of the big story beats are just the intro, the crystals, and the ending stuff. My theory is that they had more planned for these areas but whatever happened wouldn't permit it. The problem with this in particular is that if this game were around 10 hours, I'd have an easier time recommending it to people, because I'd say "well, at least it's novel". 20 hours is a harder pill to swallow for most. It's worth noting that 20 hours is on the shorter side for RPGs, but even so, that's going to deter more people than it attracts. I made a comparison to Revengeance earlier, and that game had a sub-10 hour playtime that I think really helped keep it from fading into obscurity. Had it been longer, I doubt so many people would have played it and wanted a sequel afterwards. Same goes for Luigi's Mansion. My point being, it's harder for a short game to overstay its welcome, which is something Stranger of Paradise does.
Ignoring the fact that I don't think these writers were super aware of the writing rules they were actively breaking in the process of making this game, I don't find this game that satisfying even on a surface level. It leans too heavily on the serious side near the end for that to be the case for me. I'm only addressing this because I know most people who read this are going to tell me that the whole "point" of enjoying this game is to see Jack act unreasonably towards everyone and everything in this game. I'm not saying your story has to be deep or anything to be enjoyable, but this game did try to go deeper and I think it really dropped the ball. More like dropped the ball and tried to pick it up but kept kicking it away from itself, which is a surprisingly apt analogy for the experience of Stranger of Paradise as a whole.
I desperately wanted to love Stranger of Paradise. I refrained from spoiling myself on it and gave it the benefit of the doubt at every turn. I was excited to see something so sincere in an age where vidya has increasingly become routine, safe, and stale. Although Stranger of Paradise had its moments, it reminded me that sincere things aren't always necessarily good things. I would have loved to see what this game might've been with more time and effort, but as it stands, Stranger of Paradise will fade into obscurity, and will be forgotten by most people as "the funny game with Chaos in it," because in reality, that's mostly all there is to it.
We got a lot of ground to cover so let's get straight into this. If you want to know more about RSS, read my blog post about RSS.
This is your first step. There are many RSS readers out there, but I have only used a handful. In general, I recommend Newsboat if you are generally familiar with CLI programs. If not, then Liferea is a good choice. This is assuming you are using GNU+Linux, of course. For Android, you can use spaRSS.
Another recommendation is QuiteRSS, which does work on Windows and Mac, and NewsBlur seems to be a decent choice for iOS fans. While I don't like these operating systems, I'd rather people get on the RSS train now rather than later.
As for what not to use, please do not use Feedly or anything proprietary. That is the one thing I would recommend not to do. RSS was built off of open source projects, and if it's to succeed, it needs to stay that way. I could give you lots of reasons why it's better to have things open source, but I'm just going to assume that you know already or don't care.
If none of these suit your purposes for some reason, I would recommend looking for something on alternativeto.net.
One of the best ways off the bat to get into RSS is to use RSS-Bridge, which is a webapp that can run on servers. Basically, you make your own (or use someone else's) RSS-Bridge which will create RSS/Atom links for you. Hooray!
As of the time of writing, it isn't flawless (the facebook bridge is pretty dead), but it's been good for many things for me, including Bandcamp, GOG, Gitea, Soundcloud, and more!
That being said, should I list an official (or better) method below, I highly recommend you use that instead. While RSS-Bridge is very useful, it's been prone to bugs, and generally speaking, individual sites' own RSS feeds are better optimized to show their content. It's also nice not to have to rely on a middle-man's server, which could go down randomly without you knowing it.
Blogs and Podcasts are immediately some of the most likely candidates to have an RSS feed. Usually, this is found somewhere by looking for the RSS logo on the website. However, it's worth noting that most blogs/podcasts already have an RSS feed built in. I've encountered this with nearly every one I've seen, even if the logo is nowhere to be found. If you want to know how you can find it, refer to the end section of this article about Raw HTML.
Typically, older forums will have some sort of RSS functionality so that you can parse the index of a board or topic very easily. The RSS link is typically at the bottom of the page, listed in the FAQ, or may have that handy RSS logo. If you can't find it, I encourage you to refer to the end section of this article about Raw HTML.
I used to use a feed at https://lbryfeed.melroy.org until I figured out that Odysee had their own. The format is as follows:
https://odysee.com/$/rss/@[USERNAME]/[#]
The [USERNAME] and [#] parts are subject to change, based on what the URL of the channel is. For example, my Odysee channel's URL is https://odysee.com/@the_extramundane:1, thus the corresponding RSS feed would be https://odysee.com/$/rss/@the_extramundane/1 (with [USERNAME] being what comes after the @ symbol and [#] being the number/letter after the colon).
This method provides a stream link and parses ALL of the description of the video right in the convenience of your RSS Reader.
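The channel-URL-to-feed conversion above is mechanical enough to script. Here's a minimal sketch in Python; the function name is mine, and it assumes channel URLs of the exact https://odysee.com/@[USERNAME]:[#] shape described above.

```python
# Sketch: derive an Odysee RSS feed URL from a channel URL.
# Assumes the channel URL looks like https://odysee.com/@NAME:N
from urllib.parse import urlparse

def odysee_feed(channel_url: str) -> str:
    # Path looks like "/@the_extramundane:1"
    path = urlparse(channel_url).path.lstrip("/")
    # [USERNAME] is everything before the colon (including the @),
    # [#] is everything after it
    username, _, suffix = path.partition(":")
    return f"https://odysee.com/$/rss/{username}/{suffix}"

print(odysee_feed("https://odysee.com/@the_extramundane:1"))
# https://odysee.com/$/rss/@the_extramundane/1
```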
If you want to see the latest commits for a Github or Gitlab project, then just follow the template below:
https://github.com/[USERNAME]/[REPO]/commits/master.atom
For example, https://github.com/theExtramundane/based.cooking/commits/master.atom, where theExtramundane is [USERNAME], and based.cooking is [REPO]. If you want releases, then go ahead and type in "releases" instead of "commits" in the URL.
If you just want to follow a particular user's activity, then simply type in https://github.com/[USER].atom, where [USER] is the username of a Github account.
There is also an option to get a "private feed", which can be done by logging into your Github account and scrolling to the bottom of your dashboard. This feed gives you events from people you follow with your Github account and repositories you watch (or star). Since this is obviously very dependent on an account-by-account basis, it goes without saying you can't use this method without having a Github account of some kind.
All of what I've described here should work with Gitlab as well, with github.com obviously being changed to gitlab.com.
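If you follow a lot of repositories, it can be handy to generate these feed URLs programmatically. A minimal sketch of the commit and user-activity patterns above; the helper names are mine, and "github.com" can be swapped for "gitlab.com" as noted:

```python
# Sketch of the Atom feed URL patterns described above.
def commit_feed(user: str, repo: str, branch: str = "master") -> str:
    # Latest commits on a branch, as an Atom feed
    return f"https://github.com/{user}/{repo}/commits/{branch}.atom"

def user_feed(user: str) -> str:
    # A particular user's public activity
    return f"https://github.com/{user}.atom"

print(commit_feed("theExtramundane", "based.cooking"))
# https://github.com/theExtramundane/based.cooking/commits/master.atom
```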
Like most social media sites, Instagram has no native RSS support of any sort. However, there are alternative means for such things. Introducing...
How does it work? Well, simply put, Bibliogram is something you can run on a server (or you can use someone else's site) in order to access Instagram accounts through a different "front-end". It's basically an Instagram proxy, and better yet, it has Atom feeds. All you have to do is enter the username of the person you want to subscribe to in the search bar, and then click on the button that says "Atom" on it in big letters.
You can find a list of instances here. Make sure that you pick an instance (i.e. "site") that has feeds enabled.
Well, I don't exactly know what you want from Reddit, but ok. Regarding Reddit, there are actually a surprising number of options. Reddit has kept a lot of RSS support for nearly every facet of their site. Most of it boils down to appending /.rss to a specific URL format. For example, to get the frontpage of Reddit (🤮), simply type in https://reddit.com/.rss.
However, assuming you are a decent human being who does not browse the front page of Reddit, you probably just want to follow a select subreddit or two. Here's a few ways how:
For an RSS feed of a subreddit with the "hot" sort order:
https://www.reddit.com/r/[SUBREDDIT]/.rss
For an RSS feed of a subreddit with a preferred sort order (E.g. "new", "rising", "controversial", etc.):
https://www.reddit.com/r/[SUBREDDIT]/[SORT-ORDER]/.rss?sort=[SORT-ORDER]
You can even see specific user submissions by using their username:
https://www.reddit.com/user/[USERNAME]/submitted/.rss
There's even an option to look at new comments in a single post on a subreddit, in case you need to keep up with a pinned/slow post:
https://www.reddit.com/r/[SUBREDDIT]/comments/[SIX-CHARACTER-POST-ID]/.rss
That character ID is in the URL of the post, so just look out for what looks like a bunch of random letters and/or numbers in between two slashes.
Yes, there are even more ways to use RSS feeds for Reddit, but for brevity's sake (and because someone else already made it,) I'll just link to the article right here. If that link ever goes down then I'll post a mirror of it on my site.
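The Reddit patterns above can be wrapped up in a couple of tiny helpers. A sketch; the function names are mine:

```python
# Sketch: assemble the Reddit feed URLs listed above.
def subreddit_feed(sub: str, sort: str = None) -> str:
    # "hot" is the default order; pass e.g. "new", "rising", or
    # "controversial" for a preferred sort order
    if sort:
        return f"https://www.reddit.com/r/{sub}/{sort}/.rss?sort={sort}"
    return f"https://www.reddit.com/r/{sub}/.rss"

def user_submissions_feed(user: str) -> str:
    # A specific user's submissions
    return f"https://www.reddit.com/user/{user}/submitted/.rss"

print(subreddit_feed("rss", "new"))
# https://www.reddit.com/r/rss/new/.rss?sort=new
```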
I know that many a person would like to remove themselves from Twitter without missing the good stuff. Fortunately, there is a solution, much like Bibliogram, called Nitter, an alternative front-end to Twitter, which has RSS feeds. Simply go to a profile you like and look for the RSS button in the top corner, as shown in the video below.
Alternatively, you can use this format as guide if you want a more manual approach:
https://[DOMAIN-NAME]/[USER]/rss
The [DOMAIN-NAME] is just the domain of the instance you'd like to use. For example, the official instance is nitter.net, so if I wanted to follow @NetHistorian ([USER]) on Twitter using nitter.net's RSS feed, I would type in https://nitter.net/NetHistorian/rss.
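For completeness, the same kind of one-line helper works here too. A sketch; the function name is mine:

```python
# Sketch of the Nitter feed pattern described above.
def nitter_feed(instance: str, user: str) -> str:
    # instance is the Nitter domain, user is the Twitter handle (no @)
    return f"https://{instance}/{user}/rss"

print(nitter_feed("nitter.net", "NetHistorian"))
# https://nitter.net/NetHistorian/rss
```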
I have to warn you that (chances are) your feed will become bloated rather quickly if you follow a lot of Twitter feeds. This is partly just due to the nature of the site, but I wanted to mention it. Obviously, not everyone on Twitter posts every day, but it's worth keeping in mind. If you do it like I did, then you're gonna see real quick who you like on Twitter and who you don't.
It's true, YouTube still has RSS feeds, although you can't find them explicitly. You're gonna have to use another URL format.
Now, this method is a little more obtuse. When you are on a channel's homepage, you are going to want to look at the URL, and hopefully, you will see that URL prepended with https://www.youtube.com/channel/... or https://www.youtube.com/user/.... After that, you should see a string of alphanumeric characters. If the URL has channel in it, then it's a YouTube channel's Channel ID. If it has user in it, then it's a User ID. In order to make an RSS feed, we need the channel or user ID. Here's the format:
https://www.youtube.com/feeds/videos.xml?channel_id=[CHANNEL-ID]
https://www.youtube.com/feeds/videos.xml?user_id=[USER-ID]
If it wasn't completely obvious, you insert the respective channel/user ID into the URL, and that's your RSS feed.
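Pulling the ID out of a channel URL and plugging it into the right template is easy to script. A minimal sketch; the function name is mine, and it assumes URLs of the two shapes described above:

```python
# Sketch: turn a YouTube channel/user URL into its RSS feed URL,
# per the patterns above.
from urllib.parse import urlparse

def youtube_feed(url: str) -> str:
    # e.g. "/channel/UCabc123" -> kind="channel", ident="UCabc123"
    parts = urlparse(url).path.strip("/").split("/")
    kind, ident = parts[0], parts[1]
    if kind == "channel":
        return f"https://www.youtube.com/feeds/videos.xml?channel_id={ident}"
    if kind == "user":
        return f"https://www.youtube.com/feeds/videos.xml?user_id={ident}"
    raise ValueError("expected a /channel/ or /user/ URL")
```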
Invidious, much like Bibliogram and Nitter, is an alternative YouTube front-end that allows you to watch YouTube videos on an instance. However, it's rather buggy as of writing, and it isn't really better than just using YouTube's RSS feed outright, although it does offer its own. If YouTube removes its RSS functionality in the future (which is a decent possibility), then either this or using RSS-Bridge would probably be the next best thing.
Another thing worth noting is that these IDs have no names outright associated with them, at least not in URL form. In a file with all these URLs, it can be hard to sort them and know which channels are which (especially if you have a lot of them). I guess this applies more to Newsboat users like myself, or if you have a dedicated URL file for your reader, but I just wanted to spread the good word and say: sort these using tags or comments or whatever you can, 'cause I should have.
CHEAT MODE ENGAGE
If you can't find a feed outright on a site you think has it (or even a site you don't), then you can do a final test and pull up the raw HTML. Essentially, this is the source code of the site, and I mentioned this in my previous article about RSS. If you aren't a techie person, don't worry. It's actually a lot simpler than you think!
Here are the steps in order:
1. Press Ctrl+u on your keyboard. You should have a new tab open with raw HTML code. If you are seeing a lot of angle-brackets (﹤﹥), then you are in the right place.
2. Press Ctrl+f on your keyboard. This will give you a webpage search prompt. Try typing the following words into the search bar:
   - rss
   - atom
   - xml
   - feed
If you come up short with any of these keywords, then chances are this website doesn't have RSS.
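If you do this often, the Ctrl+u / Ctrl+f trick can be automated. A rough sketch that scans raw HTML for the `<link rel="alternate">` tags sites commonly use to advertise their feeds; the function name is mine, and the regex is deliberately crude (it assumes the type attribute comes before href), so treat it as a starting point rather than a robust parser:

```python
# Sketch: scan raw HTML for advertised RSS/Atom feed links.
import re

def find_feed_links(html: str) -> list:
    # Sites usually advertise feeds in the page head like:
    # <link rel="alternate" type="application/rss+xml" href="/rss.xml">
    pattern = r'<link[^>]+type="application/(?:rss|atom)\+xml"[^>]*href="([^"]+)"'
    return re.findall(pattern, html, re.IGNORECASE)

sample = '<head><link rel="alternate" type="application/rss+xml" href="/rss.xml"></head>'
print(find_feed_links(sample))
# ['/rss.xml']
```

An empty result roughly corresponds to coming up short on the keyword search above: chances are the site doesn't have RSS.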
I hope you found this guide very helpful. If you did, consider subscribing to my RSS feed. I have more plans to further people's knowledge of RSS, but for now I'd say this about wraps it up. I hope that RSS gives you more time to look at what you like, and less time getting distracted by things that don't matter as much.
RSS (Really Simple Syndication) and Atom are both internet protocols designed to give you "feeds" from all sorts of different websites. This is very similar to the average social media feed you might see while scrolling through a social media app, except RSS/Atom need a "reader". An RSS reader just takes RSS/Atom links from websites and then makes them easy to read and scroll through. Functionally, it's very similar to a blog, with each "post" having its own separate section with links, images, and the like. Here's an example of an RSS link, and an example of an Atom link.
https://extramundane.xyz/rss.xml
https://www.w3.org/blog/news/feed/atom
Now, if you try to go to these links in your browser, chances are that you'll see a bunch of code you don't understand (in Firefox it looks a little different). If so, that's ok! RSS and Atom feeds use this code-looking stuff (Extensible Markup Language (XML)) to make it look pretty inside your RSS reader, like shown below.
That's a little cramped, but on a wide screen display it gives you lots of options to access all sorts of feeds at once. Not just blogs, but podcasts, storefronts, and even social media, all without an account!
Who made it? Well gee, I'm glad you asked. In short, RSS has its roots in the early days of web syndication (I'm talking mid-1990's), where companies like Apple wanted to create ways to gain lots of information about other websites fast and conveniently. This eventually led to the Meta Content Framework (MCF), which was Apple's attempt at this. Eventually, the co-creator of XML (the markup language I talked about earlier) decided to extend MCF into an XML application, which became a standard adopted by the World Wide Web Consortium (W3C). Microsoft made something similar; eventually this led to a protocol called the Resource Description Framework (RDF), and then the first beta version of RSS (also called RDF Site Summary) was born. Eventually, some guy made Atom, which was designed to be a better RSS, and blah blah blah...
As for who made it, it's hard to say. As with most programming projects (especially those that are open source), it's more of a community endeavor than anything else. It took the cooperation and adoption of tons of sites and companies to make it usable in the first place, which is, ironically, something harder to find these days on the internet.
RSS and Atom were at their peak in the mid 2000's. Back then, it was everywhere. RSS was used to check on forums, news sites, comments on news sites and forums, blogs, YouTube (they actually still have RSS functionality to this day); literally everything had that RSS logo slapped on their site somewhere. Even web browsers straight up just integrated them into the browser itself. Even torrenting software integrated RSS to let you automatically download the newest items on the RSS feed. The list goes on.
Then everything changed with Facebook and Twitter.
Those sites did have RSS feeds at first, but eventually they removed support. Very popular readers like Shiira and FeedDemon were discontinued, and Google Reader (extremely popular) was shut down in 2013, due to RSS declining in popularity. Eventually it was removed from all major browsers, leaving us with Internet Explorer as the most popular one to keep RSS support. Big win.
Well, sites that still promote RSS usually have that orange logo somewhere on their website, or just the logo. Actually, it's worth noting that most blog sites/older sites have RSS built in to the website, even if it's not actively promoted. I can't tell you how many times I haven't seen RSS for a site I wanted to follow, only to press Ctrl+u, then press Ctrl+f and type in "rss", "xml", or "atom" and then find a link for an RSS URL (that's actually the method I use with the atom link example I used up above).
Well that's just the million dollar question ain't it?
Let me be clear: I do not like social media. I find it to be insanely manipulative, built to be addicting and to take advantage of human vice. That may sound pretty dark to some of you, but after what I saw on Twitter last year, I can't think of it any other way. Every time I see someone log onto YouTube on their phone to watch just one video, it always leads to another, and another, and another, and then thirty minutes have gone by.
However, I am aware of the talent, artistry, and wisdom that appears on some of those platforms. While I wish they were not on them, many people who would rather not use social media actively would still like to keep up with their favorite personalities, artists, etc. This is where something like RSS comes into play.
RSS doesn't require an account to get started. It doesn't have some weird algorithm that decides what you get to see, and it's not gonna ship your personal data off to some centralized server, because there is no centralized server you are reporting to, and no relevant data to give. While not intentionally built as such, it's something that respects your privacy and your desire to actually see and view what you want, without ads or website bloat slowing down your browser or anything like that. It offers freedom. That's why you should use it.
Sorry, that's not a part of the 5 W's! However, I have a guide that may solve that question perfectly for you...
I don't believe I've mentioned it before, but I am quite the fan of Kirby Ferguson's work. For those of you who don't know, Mr. Ferguson is a filmmaker of sorts who made a great mini-series back in the early 2010's called Everything is a Remix. In this series, he basically explains how art, and to a larger extent thought itself, is memetic. We take ideas that are established and copy, combine, and transform them. It's all for free on his youtube channel. Overall I was impressed with his profundity and expert editing, so I was relatively excited when I learned he had made a new series, titled This is Not a Conspiracy.
In a nutshell, This is Not a Conspiracy talks about how thought patterns from the Enlightenment thinkers of early America shaped most of the conspiratorial thought going forward in the nation, starting with the country's roots. They believed that intention was the reason behind human events, the reason why things happen (hence the oft-repeated phrase "Cui bono?"). This thinking, while critical, is often unnecessary and exaggerated to the point where Occam's Razor is typically the more sensible option. Human problems and systems are complex. You might be able to explain how things rotate around a floating plasma ball, but predicting human events is an insanely more difficult task, due to the dynamism of human beings. There are simply too many variables to come to a sensible conclusion. In time, these theories are debunked, pruned, and ultimately shelved. Not to say there aren't conspiratorial forces in this world, but not all human events can be controlled by a proverbial puppet master.
Now that I've summarized pretty much the entire thing, let me just say that the film itself is where my real problems begin.
This is going to be more of a video editing review so bear with me.
I was genuinely shocked by how little of the video footage pertained to what Ferguson was actually talking about. It tended to use a large amount of stock footage, and while there were clips from films, serializations, and video games (only two by my count), they were never used particularly cleverly, with interesting transitions or added visual aid. Most of the visuals came off as generic filler used to pad out what Ferguson was saying, which is really a shame because he presents a lot of good ideas. The visuals just make it more unengaging, and I found myself tuning out several times throughout this 2-hour slog. It's the only series I think I've seen recently that I'd call "overedited": too much random visual information to maintain a solid focus. I can think of a handful of editors that can engage people properly, but even people like Luke Smith can create entertaining slideshow-esque presentations if you just show relevant, engaging images that pertain to and enhance what you are talking about. Heck, I used that exact style for my first video ever and I received some compliments on the visuals, despite the simplicity.
Overall, I was insanely disappointed by how amateurish the editing and visuals felt, along with the audio, which also lacked any flair and consisted of stock sounds and music as well. Now, the topics were actually genuinely well researched, but I feel they are somewhat inconsequential to the main message, which is that conspiratorial thinking is an inefficient and inaccurate way to view the world for what it really is: a network of complex systems. I literally gave you a summary up there that more than suffices. I even mentioned Luke Smith earlier, who literally made an article about this very topic a while ago. His article is honestly a lot better than this documentary if you are interested.
Speaking of which, this documentary series costs money on top of that. I think I paid $40 for a blu-ray copy. I admit I was suckered in. It only arrived recently (I assume the pandemic did something to shipping). Mr. Ferguson even sent me a note with the order.
I don't think Mr. Ferguson is a fraud or anything. In the right context, he is incredibly profound. I just don't like this series in particular. If you haven't already, go watch Everything is a Remix. If you have, you aren't missing out on much over here.
For any who are interested, you've probably been wondering why this blog has been dead beyond comparison. Simply put, I've been so focused on summer activities (and my job) that the time dedicated to my website and channel has become a very secondary thing. I'm still in "the game", I just had to take a sabbatical, more or less.
Anyways, as I'm typing this from a deck in a backyard I just figured I'd start talking more here and get to work on another video project. That's more or less it.
Although I will say, for anyone who's trying to contact me via email: I've been getting an SSL error for my domain that says the certificate has expired (despite the fact I have certbot renew it automatically), so if you do (or don't) know something about that, I can be contacted on Matrix. I'll update that info on my contacts page (if it isn't there already).
As you might have seen from last time, I've made some changes to this website, mainly the fact that I actually updated the homepage and such. I've now put links to my Odysee, YouTube, blog, and RSS feed up there for everyone to see. Also, I actually made this website look nice.
In addition, I also added a webring to this site, courtesy of the fellow anons on LainChan so be sure to check that out if you'd like. They have good website taste and I'd like to encourage a less-bloated internet experience for all, and also promote some guys who deserve it.
In addition to that, I also made an addendum to my DRM video going over some extra research and points I found during the production of that video, namely some alternate solutions and food for thought I didn't mention last time. Overall, I want the main takeaway to be that bad DRM practices reduce artists' incentive to create, and that if there's going to be DRM moving forward, then it needs to respect user privacy and be a sensible solution. I specifically cited Jacob Smith's article on the subject because he's the only one I've come across online who proposes something that can be considered a decent middle ground, and more sustainable than the current online-only, install-spyware approach that most companies take nowadays.
Anyways, I plan to do lots more videos, some in this vein, some not. I'd like to go over ImageMagick and ffmpeg one of these days for basic video/image editing techniques (might make some bash scripts for those), a normie's guide to RSS, and probably some game stuff as well. Not something dumb like basic run-of-the-mill review stuff; probably video essay things or observations about the industry, or maybe just observations about the entertainment industry in general. I seek to be profound in what I do, should I do a more long-form video essay like that. Hopefully I'll stutter less, trim the fat, and make something really thought-provoking. That, and/or have some fun here and there.
Also, I wanna set up a PayPal of sorts or some non-crypto donation method sometime in the future to (namely) help get better equipment (all FOSS of course), and do better, cooler stuff. We'll see how that goes.
That's pretty much all for now, hopefully more and better things to come, so stay tuned.
One of the first things I wanted to try when I started this website was to develop a blog. Now, I have one.
Honestly, I'm not exactly sure how much I'm going to use this, but it's really nice to have as an option. Now I don't have to worry about an arbitrary ban someplace, not that I'm particularly extreme (or at least I don't think so). It's all my own. It's a nice feeling.
Anyways I stole this blog system from Luke Smith (you can check out his git repository for it) so check him out if you're interested. Actually, check him out even if you aren't interested. He's pretty cool.
As this site continues to develop and I have more ideas I'll post more and have more things to say. I'm also setting up ways of monetization in order to further expand my efforts; hopefully to gain better equipment, hire editors for projects, make sure I don't starve, etc. Right now I have crypto wallets set up but I'll have more in the future.
Also, if you haven't seen it already, check out my first upload about DRM. I'm going to be making an addendum to that video fixing some mistakes and mentioning other things and new research I found. The future won't be all DRM stuff; I have lots of ideas, but for now tech is really the main interest. I'm rather fond of video games and entertainment in general as well. I actually have a backloggery showing all the games I've played if you're interested.
That's all for now. I have a lot to set up so please be patient. Here are some ideas for the near future:
I hope to do some amazing stuff in the future. Maybe change the world for the better. But for now, I start here in this blog post.
Actually now that I think about it, I could write some short stories with this blog. That would be fun.
A few days ago, after a long time, I decided to translate a large part of my website into Greek. It's something I had been planning for quite a while, but for many reasons I never got around to it, until today. And with that as the occasion, I'm writing this short post.
I had planned to translate my website into Greek quite a while ago, and more specifically I started in July 2020, but for various reasons I never uploaded the HTML files. Back then, one of the reasons was the font I was using at the time, which didn't support Greek characters. But after many changes, both in appearance and in the way I write my articles, I think it's a good time to work on it again.
One more reason I decided to work on this is to write more posts in Greek. While I've already written one, I don't particularly like it sitting among articles written in English, the only exception to this being the RSS feed, because I can't be bothered to make another one. Moreover, I want to improve my articles in general. I'll also probably have more to write in Greek, since it comes to me more easily than English.
Additionally, something else I think would be interesting: I'd like to start a webring of websites with Greek content, to promote our sites to a somewhat different audience. I think it would be a wonderful undertaking if anyone is interested. You can also check out the webrings on my homepage if you want an idea of what this is about. If you're interested, you can send me an email, a message on XMPP, or we can talk on the Fediverse.
That's all for now. I'm preparing new articles that I'll upload soon, both in Greek and in English.
This work is licensed under the Creative Commons Attribution 4.0 International License.
After my usual short absence from posting here, I decided to write a blog post again. Things have been changing for some time now, and I want to share with you some of the changes I have been working on.
First, you might have already noticed that the website's appearance has changed. For a few months, I have been experimenting with the website's CSS stylesheet to make it look a bit nicer while keeping it as simple as possible. Not too long ago, I decided to check whether there is an archived version of my website with my older design. To my surprise, there were a few archives with some stuff I had removed from the website back then. Looking at that brought a smile to my face, reminding me how nice it was to experiment with CSS 2 years ago. So, I decided to challenge myself by recreating it in a way that would work with my current website's layout, and it worked pretty well with minimal changes.
Also, I decided to work on a background. In fact, the grey background I used was supposed to be a placeholder, until I find a good alternative, which I haven't found until now. For my current background, I decided to get an image from my collection and do some editing on GIMP, mostly dithering and messing with a few effects. I have changed the colors around the website, like the link colors for example, to match the whole theme of the website as well.
Another change I've done recently was to rewrite my website in org-mode. I've been using Emacs for the last few months as my editor of choice, and I'm quite satisfied with it. One of its features is org-mode, a very powerful markup language which can generate text in various formats, including HTML. For a few months, I used sblg to generate my last two posts. But that seemed pointless to me when I can write my posts in a more convenient way, since I switched to Emacs. I didn't have many issues switching some of my pages from handwritten HTML to org-mode, but I had to change some stuff in some places. It can be quite time consuming.
You may have noticed that I have been switching servers for some time. After an issue with my server's hardware back in March, I have been moving to VMs provided by friends in order to host some of my stuff. I know that selfhosting would be the best solution, but unfortunately, I can't do that because of various issues, like having a slow internet connection and not having any hardware to use as my server, except for my laptop, which I used to host mirrors of my website on Tor and I2P. Currently, I'm using a FreeBSD jail. Being used to servers running Linux, I find it slightly more challenging to set up services, but I'll eventually get used to how it works.
By the way, talking about BSDs, I recently replaced the Mac OS installation on my laptop with GhostBSD, a distro based on FreeBSD. It's a distribution that makes FreeBSD easier to set up for users who aren't quite familiar with how BSD systems work. I had to deal with a couple of FreeBSD installations that didn't work for various reasons, as well as the hell of setting up UEFI. But now I can boot into GhostBSD, I'm enjoying it, and I'm writing this post from that system. I'll probably write more about it in its own post, similar to the one I wrote about Void Linux.
Something I have been thinking about is that the only thing I've been posting about on my website is technology. While I really like writing about it, I think it's boring to write about a single subject all the time. Originally, I wanted to write about more things, like anime and music I have enjoyed, but I haven't really done so. I don't consider myself good at writing, especially in English, as it isn't my native language. I'm also not really good at writing articles, and I don't like posting short pieces of text here. I think it's better to work more on that in the future.
I tried to keep the post somewhat short, although I know it really isn't. If you have any questions about anything, you're welcome to ask me. That's all I have to say for now. I'll be back with more stuff soon.
This work is licensed under a Creative Commons Attribution 4.0 International License.
This is the English version of the guide I posted in the Linux User forum. This version also fixes some errors I made in the original version.
In this guide I'll show you how to set up a fully compliant XMPP server step by step, from scratch. In contrast with other guides that explain the basics of setting up a minimal server, this one aims to help new sysadmins set up an XMPP server that complies as fully as possible with the XMPP protocol.
There are a few XMPP servers you can pick from. This guide focuses on Prosody, which is one of the most popular options for an XMPP server, and it's quite flexible. Moreover, these instructions are given for servers that run Debian (or Debian-based distributions). If you use another distribution for your server, use the appropriate commands for your distribution and refer to the documentation of the software you are going to use.
XMPP (also known as Jabber) is a protocol for instant messaging. Unlike similar services that are based on a centralized server, XMPP is decentralized, with lots of independent servers running appropriate software.
To set up the XMPP server, the following are required:
To install Prosody, run:

sudo apt install prosody prosody-modules mercurial

This installs Prosody along with some modules that will be very useful later. Mercurial will be used afterwards to download some additional community modules.
To download the community modules, run:

hg clone https://hg.prosody.im/prosody-modules/ prosody-modules

(Personally, I keep the prosody-modules directory at /usr/lib/prosody/modules/ for convenience. You can put it in whatever directory you want.)
To set up Prosody, edit the /etc/prosody/prosody.cfg.lua file as root with the text editor of your choice. The following should be configured:
Set the admin accounts:

admins = { "username@domain.tld" }

Add the prosody-modules directory to the plugin paths:

plugin_paths = { "/usr/lib/prosody/modules", "/dir/of/prosody-modules" }

Enable the modules you need. (Note: in the Lua programming language, two dashes at the beginning of a line declare that line as a comment.)

modules_enabled = {
    -- Generally required
    "roster"; "saslauth"; "tls"; "dialback"; "disco";
    -- Not essential, but recommended
    "carbons"; "pep"; "private"; "blocklist"; "vcard4"; "vcard_legacy";
    -- Nice to have
    "version"; "uptime"; "time"; "ping"; "register"; "mam"; "csi_simple";
    -- Admin interfaces
    "admin_adhoc";
    --"admin_telnet";
    -- HTTP modules
    "bosh";
    --"websocket";
    "http_files";
    -- Other specific functionality
    "posix";
    [...]
    "proxy65";
    -- Add if you have downloaded the community modules
    "cloud_notify"; "smacks"; "turncredentials"; "vcard_muc"; "external_services";
    "bookmarks"; "server_contact_info"; "http_upload_external";
}
Disable public registration (or set it to true if you want to allow in-band account registration):

allow_registration = false

Store passwords hashed and use the internal storage backend:

authentication = "internal_hashed"
storage = "internal"

Set the certificates directory, which is relative to the configuration directory (i.e. /etc/prosody/certs):

certificates = "certs"
Add a VirtualHost with the domain and the subdomains needed:

VirtualHost "domain.tld"
    ssl = {
        key = "certs/domain.tld.key";
        certificate = "certs/domain.tld.crt";
    }
    disco_items = {
        { "upload.domain.tld", "File upload" };
        { "muc.domain.tld", "MUC" };
    }
Configure BOSH and its HTTPS certificates:

consider_bosh_secure = true;
cross_domain_bosh = true;
https_ssl = {
    certificate = "/etc/letsencrypt/live/domain.tld/fullchain.pem";
    key = "/etc/letsencrypt/live/domain.tld/privkey.pem";
}
Set the server's contact addresses:

contact_info = {
    abuse = { "mailto:abuse@domain.tld" };
    admin = { "mailto:admin@domain.tld" };
    feedback = { "mailto:feedback@domain.tld" };
}
Add the MUC component:

Component "muc.domain.tld" "muc"
    restrict_room_creation = false
    modules_enabled = {
        "vcard_muc",
        "muc_mam",
    }
Add the upload component:

Component "upload.domain.tld" "http_upload_external"
    http_upload_external_base_url = "https://upload.domain.tld/"
    http_upload_external_secret = "secret"
    http_upload_external_file_size_limit = 104857600 -- 100 MiB
Declare the STUN/TURN services:

external_services = {
    { type = "stun", transport = "udp", host = "turn.domain.tld", port = 3478 },
    { type = "turn", transport = "udp", host = "turn.domain.tld", port = 3478, secret = "secret" }
}
Create your account with:

sudo prosodyctl adduser username@domain.tld

Import your certificates with:

sudo prosodyctl --root cert import /etc/letsencrypt/live

If there isn't any certificate yet, create one with certbot.
Make the _xmpp and _xmpps SRV records, as they are mentioned here. You can also make SRV records for the subdomains you need, for example for MUC (the configuration for uploads and the turnserver will be mentioned below). It may take a few minutes for the records to be deployed. Also forward the ports (TCP only) on your router and configure your firewall accordingly, if it is enabled.
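For illustration, records of this kind could look roughly like the following sketch (the hostnames, priority/weight and TTL values are placeholders; 5222 and 5269 are the standard client-to-server and server-to-server ports, and 5223 is commonly used for direct TLS — adapt the targets and ports to your own setup):

```
_xmpp-client._tcp.domain.tld.       3600 IN SRV 0 5 5222 domain.tld.
_xmpp-server._tcp.domain.tld.       3600 IN SRV 0 5 5269 domain.tld.
_xmpps-client._tcp.domain.tld.      3600 IN SRV 0 5 5223 domain.tld.
_xmpp-server._tcp.muc.domain.tld.   3600 IN SRV 0 5 5269 domain.tld.
```

You can verify that the records resolve with a tool like dig before moving on.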
If the steps above are done, enable and start the prosody service with the following commands:
sudo systemctl enable prosody
sudo systemctl start prosody
After starting Prosody, try to connect with your JID and your password from an XMPP client (I suggest Dino on the desktop and Conversations on Android). If you can connect to your server, everything went alright. Otherwise, run the sudo prosodyctl check command; it's very helpful for troubleshooting any issues with Prosody.
If your server is functional, you might want to check compliance.conversations.im, where you can see your server's compliance with the XMPP protocol. It's suggested to make a testing account, which can also be useful later.
In order to upload files on the XMPP server, some things have to be configured first.
As you can see above, the upload component and the http_upload_external module were added in the configuration file. That module has a few implementations, as you can see here. In this guide, Prosody Filer (the implementation in Go) will be used, as it is the easiest to configure.
Install golang on your system with the sudo apt install golang command.
Clone the repository:

git clone https://github.com/ThomasLeister/prosody-filer

Enter the prosody-filer directory and run the build.sh script to compile Prosody Filer.

Copy the prosody-filer binary and config.example.toml to a directory of your choice, for example /var/www/upload.
Rename config.example.toml to config.toml and edit it like this:

listenport = "[::]:5050"
secret = "secret"
storeDir = "/var/www/upload/uploads/"
uploadSubDir = ""

The secret must be the same as the one set in Prosody's configuration, in the upload component.
Create the /etc/systemd/system/prosody-filer.service file and write the following to it:

[Unit]
Description=Prosody file upload server

[Service]
Type=simple
ExecStart=/var/www/upload/prosody-filer
Restart=always
WorkingDirectory=/var/www/upload

[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable prosody-filer
sudo systemctl start prosody-filer

You can verify that the service is running with the sudo systemctl status prosody-filer command.
Create the /etc/nginx/sites-available/xmpp-upload file and write the following to it:

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name upload.domain.tld;
    ssl_certificate /etc/letsencrypt/live/upload.domain.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/upload.domain.tld/privkey.pem;
    client_max_body_size 100m;
    location /upload/ {
        if ( $request_method = OPTIONS ) {
            add_header Access-Control-Allow-Origin '*';
            add_header Access-Control-Allow-Methods 'PUT, GET, OPTIONS, HEAD';
            add_header Access-Control-Allow-Headers 'Authorization, Content-Type';
            add_header Content-Length 0;
            add_header Content-Type text/plain;
            return 200;
        }
        proxy_pass http://[::]:5050/upload/;
        proxy_request_buffering off;
    }
}

The server_name matches the upload subdomain configured in Prosody, and client_max_body_size matches the 100 MiB upload limit set above.
Enable the site by running sudo ln -s /etc/nginx/sites-available/xmpp-upload /etc/nginx/sites-enabled.
sudo systemctl enable nginx
sudo systemctl start nginx
If there are no errors, restart Prosody by running sudo systemctl restart prosody. Then you can check from your XMPP client whether file uploading works.
To improve your XMPP server a bit, you can easily set up a turnserver to be able to make and accept voice and video calls from friends.
Install Coturn by running sudo apt install coturn.
Change the TURNSERVER_ENABLED value in the /etc/default/coturn file from 0 to 1.
Enable and start the service with sudo systemctl enable coturn and sudo systemctl start coturn.
Edit /etc/turnserver.conf, uncommenting and setting the following options:

listening-port=3478
listening-ip=0.0.0.0
external-ip=[Your external IP address]
min-port=49152
max-port=65535
static-auth-secret=secret
server-name=turn.domain.tld
user=test:test123
realm=domain.tld

The user option here is useful in case you want to test the turnserver. Also, static-auth-secret must be the same as the secret that was set in Prosody's configuration, similarly to the upload secret.
Apply the changes by running sudo systemctl restart coturn. Then make a DNS record for turn.domain.tld pointing to your server's IP address, with the TTL set to 3600 (or chosen automatically).
You can restart Coturn and Prosody and try the calls function with another user, such as the testing one mentioned earlier in the guide.
If the setup process went without any issues, your server is ready and probably complies fully with the XMPP protocol. But XMPP's capabilities don't stop there: you can improve your server with lots of extensions to fit your needs.
Despite being a bit late, I hope 2022 has started well for you. I decided to write a quick retrospective for 2021, because for me, 2021 was an interesting year, both for this website, as well as for me personally. I will also share with you some thoughts I have about the future, like plans I might work on and some predictions about the future in general.
In my opinion, 2021 was more or less a continuation of 2020, which was generally a bad year for lots of people. For me, 2021 was a year I have worked a lot on my projects, especially this website.
In March of 2021, with the help of a friend, I got access to the small laptop I have been using as my server.
Later, I acquired a domain, which I got for free, but a couple of months later I switched to another domain because of some technical issues I had.
During that year, I experimented with various services. Some of them are still in use, while others didn't work as well as I wanted. In my opinion, hosting these services is not only an interesting learning experience, figuring out how to set them up and configure them, but also a small attempt to decentralize the internet.
As for me, not many interesting things happened. I was able to get a job for a couple of months. That helped me get some money to rent an apartment close to my university, to attend my classes during my last year there. Unfortunately, it didn't happen. I wasn't able to find a place to live, even for a short period of time, and I had to return home. To make things worse, there weren't any online classes to attend. So I'm stuck in a position where I can't finish my degree.
Besides that, something that I realized was that I was procrastinating a lot, which shows in both this website, and some of my projects. I personally want to change that.
I have been thinking about what to do in the future. I'm not sure if I'll actually do some of those, but I'll try working on them, at least.
First, I want to write more stuff here. Sometimes I have good ideas about things to write on, but I either don't feel like posting them, or I don't have the time to do so because of IRL circumstances. I might also write stuff in my native language. It's easier for me to write some things that way, even though not many people will read them (not that I really care about that, to be honest). I sometimes post stuff that I have translated, though.
I'm thinking of working on some software, probably something web related. I haven't really tried making something like that before, but that would be an interesting challenge. And last, but not least, I want to improve my services. I think that they are good enough both for my hardware, and my needs as they are, but I think there's always some space for improvement.
That's what I had to share with you. I think I'll have more interesting stuff to post soon.
This work is licensed under a Creative Commons Attribution 4.0 International License.
A couple of days ago, I decided to try compiling Web Browser on my computer. For those who don't know about this browser (yes, I know it has the most generic name possible), it's a fork of Pale Moon that fixes some of the issues Pale Moon has. I had tried compiling it a couple of times in the past, but it failed because of various issues. After thinking about it, I decided to check the git repository, clone it and compile the program. The instructions for compilation are simple, and the process took about 30 minutes on my computer.
Web Browser displaying a website
After compiling it, I did some testing to see if it works as it should and then I installed it on my system. It works almost exactly like Pale Moon does, without the privacy issues. For me, it was very easy to migrate from Pale Moon to Web Browser. I was already using Pale Moon and I just had to copy my files from its config to Web Browser's config.
Despite working well for me, that doesn't mean Web Browser doesn't have its own issues. You can't download plugins and themes directly from Pale Moon's website, which seems to block the browser with the "Unable to Comply" error, although you can sideload the themes and plugins you need. Another issue is that you might be confused by the directory where the configuration files, such as the profiles, are kept. I noticed a pattern here: the directory is at ~/.[name of developer]/[name of browser]. In my case it was at ~/.individual programmer/webbrowser, with spaces, which is annoying to browse in the terminal of a Unix-based system.
Personally, I'd like to mess with its code when I have some more free time. One of the changes I want to implement is to move the configuration to ~/.config/webbrowser. It's annoying for some users (and for me as well) to have each application create a file with its configuration in the home directory when the .config directory exists. I'll also try to find a way to make plugins downloadable through the browser. In the worst case, I might try setting up a small repository with plugins.
Those are my thoughts about Web Browser. I'd like to get your feedback, especially if you have tried the browser yourselves.
This work is licensed under a Creative Commons Attribution 4.0 International License.
A few days ago, I happened to take part in a discussion about some problems with the Greek keyboard layout, more specifically with the way the accent mark is typed, which significantly reduces typing speed. After a discussion in the kill -9 IRC channel, I decided to write this text, in which I list my complaints about the Greek language on computers. This text, like any other text on the page, is a work in progress, and it's written to be somewhat exaggerated in order to highlight some problems.
Here is the text:
I remember, years ago, hearing the (relatively) well-known nonsense that the language of computers is Ancient Greek. I don't know how much weed whoever came up with that smoked, but it's the biggest baloney I've ever heard, since anyone without a room-temperature IQ knows that computers only understand the binary system. 1 and 0, that is, on or off, to explain it simply to people with the IQ of an amoeba. I wish that were the only problem of the Greek language on computers, but there are several practical problems that remain unsolved.
Let's start with the classic problem of greeklish. It's the ugliest thing I've ever seen. In some cases you struggle to make sense of it, since everyone writes however they please. Fine, I understand that not everyone is good with spelling and accents. My eyes have bled many times reading messages that needed some kind of decryption because the person on the other end "can't be bothered to write them properly". But greeklish is not a solution. It may once have been, back when there were issues with support for non-ASCII characters. But that is very rare nowadays, and the average Greek (yes, including the computer science student who happens to read the articles on kill-9) is not going to use some operating system that's in alpha just to post nonsense on zucc's carcinoma. The same goes for the reverse, engreek. Whoever uses it non-ironically deserves to be hanged in their village square.
Every time I see a Greek translation of an operating system, I instinctively want to format and reinstall the English version, even if English is a bad language for its own reasons. Many agree that Greek technical translations on computers are truly awful, and a basic reason is that some **official** terms really ARE terribly translated. For example, the official translation of the term "bit" is "δυφίο". While the logic behind the translation is the same as the original term, that is, translating the phrase "binary digit" as "δυαδικό ψηφίο" isn't bad, the abbreviated form of the term in Greek sounds awful.
There is nothing more annoying than wanting to use a good font for your system (e.g. Terminus) and realizing that it renders files containing Greek characters incorrectly, or not at all. The only solution to this is to use a font that supports Greek characters, such as Inconsolata LGC (in the terminal), and to configure your system appropriately so that there are no issues with Greek text.
Something I realized a while ago is that even the Greek layout on Greek keyboards is terrible. If you type fast, the way you have to enter accents slows you down considerably. Instead of having a few modifier keys for entering accents and diaereses, you first have to press the key where the "Greek question mark", or semicolon as it's called in English, normally sits. But if you want to type the question mark itself, you have to press "Q". Fine, we have 24 letters in Greek and they fit in the English layout. But that doesn't mean stupid choices weren't made in the Greek layout.
I understand that for many people it isn't easy to avoid writing in Greek, but I'll list the alternatives I suggest, where that is possible.
This work is licensed under a Creative Commons Attribution 4.0 International License.
In this post, I'm writing about one of my favorite projects out there, the CLI Assistant. As the name implies, it's a personal assistant that works in a terminal emulator window. It's inspired by the assistants made by corporations like Apple and Google, and it's an attempt to make a simple, free and open source assistant that functions in a similar manner, without the unwanted features proprietary software has.
Back in early September of 2020, I decided to mess with my hackintosh installation, and I decided to look at what the system and its preinstalled utilities could offer. One of the preinstalled applications is the voice assistant, Siri, which is used in all of Apple's operating systems, both on its phones and computers. So, I tested it and asked about various things, just for fun. Later, I decided to disable the voice input, just using my keyboard for input. While I was using the keyboard input, I had an idea.
"What if I tried to make a simple version of this program for Linux? I don't know many programs on Linux that do something similar to that."
Then, I booted back to Linux again and started experimenting with some code, which eventually would be the base for the program.
I started almost immediately to mess with code to make this idea possible. I decided to write it in C, because this is the language I'm most familiar with, and it's a simple language in general. Of course, a project like this can be a bit challenging, especially when you aren't experienced enough with the language.
Something I wanted to do was to find a way to compare the input so that I could type something like "What time is it?" and the program would recognize time as a keyword. After reading through C documentation (which is usually easy to find on a Linux system), I found that to make that possible I had to use the strstr() function, which locates the first occurrence of one string inside another. That's exactly what I needed for my program.
To make it work as intended, for each command I had to compare two strings: the user input and the keyword of the command, which I wrote as a constant value. Because of how strstr() works, I had to create an empty string (which I named cmdcmp, from the words "command compare") that's used as a pointer. Using "if" statements, I added a couple of checks. The first one checks whether the keyword value exists in the input (command) value, and the output points to where the substring starts if it does. The second check is to see if cmdcmp is equal to the keyword value that was set in the code. If both are true, then the function for that command runs.
To make it seem less confusing, here's the line of the code for this, with the weather command as an example:
if ((*cmdcmp = strstr(command, weatherCommand)) && (*cmdcmp = weatherCommand))
Something you may notice if you read the source code is that I use the system() function in most of the commands, as well as for the voice output, which works with espeak-ng. The reason for using this function is to avoid writing unnecessary code and instead use the utilities that usually exist on the systems the program targets.
The first public version of the program was uploaded on GitHub on the 8th of September 2020, as a proof of concept. After I finished the code, I wrote the documentation for it on the README.md
file, with compilation instructions and an explanation of how the program works. Good documentation makes the program easier to understand for those interested in contributing to it, as well as for the developer who wrote it, when fixing issues in the source code in the future.
On the 11th of April 2021, I updated the project. One of the changes was to include a makefile, to make the program easier to install and to allow changing the flags without editing the code manually. Other changes were the rename of the C file from main.c
to assistant.c
and some minor changes in the code.
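For a one-file project like this, a makefile can be very short. Here's a minimal sketch (the flags and targets are illustrative; the actual makefile in the repository may differ):

```make
CC     ?= cc
CFLAGS ?= -Wall -Wextra -O2

# Build the assistant binary from its single source file
assistant: assistant.c
	$(CC) $(CFLAGS) -o assistant assistant.c

clean:
	rm -f assistant
```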
Despite the slow updates, the project isn't dead. I have been working on an update for some time, but it will take a while to finish, since my free time is limited and split across various things.
One of the changes I'm working on is a better user interface. As the program is made to run in a terminal, the best option is ncurses. I have been learning how to work with the ncurses library for some time, and in my opinion, it's the least annoying user interface library to deal with. I plan to add support for some other GUI toolkit, like GTK or Qt, in the future.
Something I'm working on as well is improving the code. I want to make the program easier to extend and more customizable. Having a few hard-coded commands might not be a problem now, but it will become one in the future.
Of course, as a free and open source project, you can contribute to it if you want to. If you have any ideas, I would like to read your suggestions. The links for the git repositories can be found at the projects page.
Thanks to godcock for suggesting that I write this post.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Hello, people! I hope you are fine these days. As you may have noticed, I decided to upload a post to my website after 3 months of silence here. If you thought that something bad happened to me (or not, who cares?), you probably haven't checked my posts on Mastodon yet.
Well, I have been busy with various things over the last few months. I was occupied with online exams, which I hate, as well as with personal projects that I will tell you about later in this post. I was also working on some new posts. As I write this, there are 4 posts in progress.
I have never stopped working on this page. I have mostly been updating the webring quite a bit in that time. There are quite a few websites in the webring right now, and the webring thread on Lainchan is popular. Also, as you may have noticed, there is a "Services" link on the top of the main page, as well as another link at the mirror links. I recently got remote access to an old machine that I use now as my server and I host a mirror of the website as well as some services. I'm still working on those, so there are still some problems I have to resolve. I also forgot to say that I have been hosting the site at Codeberg as well and all the content exists there, mostly because it's much easier to edit my website using git.
As for the things that aren't related to my website, I said earlier that I have been busy with a few projects. Something I'm excited to work on is a translation of the LainTSX project, as I really like anything related to the anime "Serial Experiments Lain", and it's a great attempt at a translated version of the original PSX game. I'm also learning new programming languages. I really like Go, though C is still my favorite. Go is a very nice language for web applications; some of my services run on it, and they work great. I have also decided to learn some JavaScript, despite hating using it. It's better to try before you hate it, after all. And last, but not least, I recently got access to a tilde site. Instead of using it as another mirror, I decided to put some random stuff on it for now. I already have 3 clearnet mirrors, so having another one seemed a bit too much. I might use it for something in the future, but I haven't thought about what. If you want to visit it, here's the link.
As for what I'm going to do in the future, I think it's better to post stuff more often, but sometimes it's better to think well before writing something; the writing gets better when you look over the errors and fix them. Something I want to work on for my website is synchronizing updates between my git repository and webpage on Codeberg, my server, and the Neocities page. I haven't decided how I will do that yet, but I have to work on it.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Recently someone left a comment on my website with the following message:
I wonder how your website is setup, not the html but the server side of things like - os you run - web server - those weird .shtml files you have - do you use ansible or do you manually install it - is it self hosted or is it a vps - how are you paying for domains - and possibly something importent i missed? I know i could check some of these things my self but there are ways to obfuscate them for instance whois privacy and nginx mirror/cache, i would love a blog post about this if you dont mind.
I will be glad to answer these!
The OS of the system really doesn't matter but right now this website is on a Debian server. The reason I write that it doesn't matter is because jakesthoughts.xyz went from CentOS -> Arch -> FreeBSD -> Debian and in the future I will probably go to another OS. Once you understand 'leenix' enough you understand it for nearly every distro. The difference at that point becomes 'distro-isms'.
That said, don't use 'bleeding edge' distros because new updates can have bugs that could lead to your stuff getting hacked (poser's word for crack).
Jakesthoughts.xyz is proxied from NGINX to Apache.
I do this because Apache does CGI and NGINX does not. If this seems dumb and wasteful then you are correct. I know that NGINX has fcgi but I've had it set up in this way since ... transferring over to Debian and have been meaning to 'fix it' since I set it up. I'll fix this eventually, probably just removing NGINX from the whole mess.
It resulted like this because I badly wanted to use CGI and when I first started using ganoo/leenix I could not figure out how to get NGINX to do CGI nor could I figure out how to proxy to Apache, so I just used Apache which does CGI if you uncomment some CGI modules and write which file types should run as CGI, among other details I do not recall.
Most people just recommend NGINX. I have no strong preference but will note that Apache has specific features that can be useful.
In my previous section I wrote `which file types should run as CGI', and those .shtml files were that. I no longer do that, because '.shtml' looks ugly; instead I treat each page as a '.html' CGI script using Server Side Includes, which runs the CGI and then inserts its output into the page.
If the last sentence seemed really dumb, then you are correct: there is an 'XBitHack' option which treats '.html' files with the executable bit set as needing the CGI/Includes treatment, but I apparently do not use it.
You're not actually supposed to find the .shtml files, I suppose I left links linking to them by accident somewhere.
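For reference, the Apache side of an SSI/CGI setup like the one described above usually boils down to a few directives. This is a generic sketch, not my actual config, and the directory path is a placeholder:

```apache
# Enable SSI and CGI for the document root (path is illustrative)
<Directory "/var/www/html">
    Options +Includes +ExecCGI
    AddType text/html .shtml
    AddOutputFilter INCLUDES .shtml    # parse .shtml files for Server Side Includes
    AddHandler cgi-script .cgi         # run .cgi files as CGI
    XBitHack on                        # or: parse .html files with the executable bit set
</Directory>
```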
I manually install everything, I haven't experimented with Ansible yet. It really isn't so hard to install everything when you know the software somewhat. The 'hard part', maybe, is when you are unfamiliar with the 'distro-isms'.
Debian, for instance, puts all the modules in their own directory and has 'apache2.conf' use files in 'mods-enabled/', which are just symlinked to something in 'mods-available/'. Every other distro I've used just has an 'httpd.conf' with each unused module commented out. With FreeBSD, if you install a module, the relevant details end up in a module directory and 'httpd.conf' has already been configured to load installed modules from that directory.
At the moment it is a VPS, but I am very strongly considering self-hosting again.
When jakesthoughts.xyz was on Arch, it was actually self-hosted from a Raspberry Pi (again, maybe do not use 'bleeding edge' for services).
Let's do the math:
Raspberry Pi 4 with 4GB of memory:

Tech:
- 100% of a CPU that maybe isn't the best
- 4GB RAM
- You can have storage of any size that the RPI will take
- Unlimited bandwidth implied (except maybe what is imposed by an ISP (rip Americans))
- Access to the hardware
- Maybe you have a static IPv4 address

Cost:
- One-time payment to get the SBC
- Payment for the other things, like power supply and SD card
- Electric bill maybe goes up a few dollars

Fun:
- Can hide at your friend's house and use their internet to host your stuff for 'free'

Current VPS that is hosting jakesthoughts.xyz:

Tech:
- 15% of a good CPU
- 512MB RAM
- 10 GB storage
- Unlimited bandwidth explicitly mentioned
- Guest machine that runs alongside many other guest machines
- No hardware access
- Guaranteed static IPv4 address

Cost:
- Depends. I pay about 3 dollars a month for what is in 'Tech'. Better tech = more money spent.

Not Fun:
- VPS provider can take away your VPS for any reason
- Sometimes the VPS node goes down and you have to wait hours for it to come back up
- FBI/your-local-equivalent can raid your VPS provider and force them to hand over a copy of *everything*
VPSs have a use but using one just to host your website is very much overkill. It is cheaper to just hook up an unused computer to a router and just have that as the server. Plus you probably will have significantly better tech than what a VPS provides.
Maybe the issue most people will have with self-hosting will be related to internet reasons. Not every ISP gives their customers a dedicated IPv4 address and even with IPv6 becoming a thing, some ISPs cannot figure it out - even though they offer it to their customers.
VPS and self-hosting require almost the same level of technical skill. Maybe with a VPS you can ask for tech support, but I've never done that.
If you want to do email, then using a VPS is probably the way to go, because your own ISP has likely put your IP address on a blacklist and probably will not allow you to change the rDNS for your IP address.
Obviously hosting from your own house can be bad, since if you get 'HackerNews-ed' or whatever, then your home's IP address will be DDoSed by people trying to visit. People might intentionally DDoS you just for the lulz.
This is something you will have to figure out yourself. If you do the self-hosting thing, make sure to put your stuff on a different vlan.
One idea I've considered but am not totally convinced by is having a VPS just reverse proxy what you serve on your home stuff.
Normally.
Whois protection should be offered by your domain registrar, by default, for free. Some registrars, like GoDaddy, will make you pay money for it. GoDaddy, in my opinion, is a dogshit domain registrar who will call you in an attempt to up-sell stuff. In my opinion, do not use GoDaddy, or transfer away from them.
Aside from that, be mindful of the reputations of the services you must work with: VPS providers, domain registrars, etc. Some of them are run by incompetent people, who have nice epic leaks that spill all your PII for the world to look at.
Some people might think: 'You should NEVER trust a company with your PII!' Well, you never really expect it to happen until it does, then you realize the amount of faith you put in random companies. "Do I really want to sign up with this company using the email address of: 'firstname.lastname.DoB@email.com'?"
I don't use NGINX mirroring or any kind of caching (besides maybe client caching, but I leave that to default settings), my websites do not get visited enough for me to even consider it.
I put my websites on the 'deep web' as well, so they can be visited via alternative routes in the event of a DNS failure, or if I decide it is time to stop paying for my domain name, or if ICANN/the tld owner decides to yank the domain from me for vague reasons which are never explained (this has happened to someone I know).
Honestly, a decentralized replacement for DNS needs to come soon.
I am making a point to never buy another '.xyz' tld because the owner can yank your domain and there is nothing you can do about it. Jakesthoughts.xyz will one day be transferred over to some other domain.
Additionally, some tlds are seen as 'unprofessional', and email deliverability can be a hassle. For example, the '.top' tld is considered suspicious by SpamAssassin, a program that detects spam mail. Some services, like Steam, will not accept a '.top' domain in your email address. Some firewall programs block the '.xyz' tld entirely, so you can be sure they block other tlds as well.
Also, some SMS/IM apps will not even display messages that contain a 'bad' tld, like jakesthoughts.xyz.
https://www.spotvirtual.com/blog/the-perils-of-an-xyz-domain/
The Tor project has made a PDF where they list each tld and rank them, but I cannot seem to find it at the moment.
I want to at least touch on OpenNIC: you can get tlds from them for free, but the catch is you need to convince pretty much everyone to use a DNS server that supports OpenNIC tlds. Good luck.
Your domain registrar will probably do DNS for you. You can switch over to a different DNS provider, like FreeDNS from Afraid.org, or even host it yourself (I am not really sure what the purpose of that is, besides bragging rights and having more maintenance to do on your stuff).
At first DNS is extremely confusing, but the key details are: A and AAAA records point to your IPv4 and IPv6 addresses respectively. For 90% of use cases that should be sufficient. There is also CNAME, which just tells the DNS software to look up the record the CNAME points at; I sometimes use it for 'wildcard' subdomains.
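To make that concrete, the records for a hypothetical domain (the names, TTLs, and addresses below are placeholders) might look like:

```
example.com.       3600  IN  A      203.0.113.10
example.com.       3600  IN  AAAA   2001:db8::10
*.example.com.     3600  IN  CNAME  example.com.
```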
Don't worry about breaking your DNS, you can always set it to something later, your domain will not cease to exist if you mess something up.
I cannot think of anything else so I guess that about wraps this article up. Thank you for the question, `random person from irc'!
In one of my previous blog posts I mentioned writing an app for the purpose of 'forwarding' text and phone calls.
There is such a thing as call forwarding, where phone calls to a number get redirected/forwarded to another number. This is not what I mean.
What I meant was mirroring: a call to one device (with an actual number) will be a call on all associated devices. A call from any associated devices will call from the one device that actually has a phone number. The text and phone state of the primary device (the one with the number) will be mirrored on to the other devices.
I will document my current thoughts on this app; maybe someone else can go far with it (and hopefully release the project on F-Droid). I won't touch it for a while since I am dealing with another project.
I am writing this because I haven't found any apps that do this. If one exists (and is free software), please let me know so I don't actually have to learn Java.
Each device needs to be able to contact the server in order to even function. In an office or a home setting this is probably not an issue since WiFi will most likely be available. When out and about, connectivity can be an issue, even when using hotspot as the range of hotspot likely is under 20 yards and is easily thwarted by cars and nature and bricks, unless some contraption is used, perhaps a yagi antenna, or some other antenna, which requires preparation.
It is not guaranteed that mobile devices will even be able to physically connect to the antenna to broadcast hotspot or to 'receive internet' somehow, except perhaps with a usb-c to female usb adapter which the antenna connects to, and even then, perhaps Android won't know what to do with the antenna. To be honest, I don't foresee many cases where the perfect situation will occur. I do know that a lot of places offer 'free' (as in beer) WiFi.
But in the event connectivity can be found, what about latency and all these other restraints? A server 'far away' relatively speaking, could have to spend a REALLY long time passing voice data along (imagine constantly interrupting each other because the delay is like 2 seconds).
What about using Bluetooth?
The range of Bluetooth isn't long enough and from what I've read, generally what uses Bluetooth would interfere with whatever uses hotspot or 'xG' or WiFi. Also, in Bluetooth terminology there is 'Receiver' and 'Transmitter' and Android apparently doesn't like to be the receiver.
The 'owner' of the device (not the person who rents the device and has to ask permission to use it and give up certain rights or give Google/some-evil-company certain rights) may decide that this 'mirroring' thing is bad and somehow violates some random vague terms of their contract. They may put restrictions on such app capabilities, so it is possible that this project is already dead in the water. However, given that XMPP apps exist, I find this to be unlikely. The main thing, that I imagine that they might take offense to is another device commanding the device with 'cellular' capabilities to do things like calling and texting.
It is possible, for whatever reason, that Android without being rooted will not allow audio from a 'phone call' to go to another location.
Another possible problem: directing audio from another source (the other phone's microphone, sent over the 'internet') so that the other party can hear what is being said.
Server: The device/hardware that relays messages and statuses to other devices.
Client: All connected devices that receive from and transmit to the server.
Client 'Data': The connected device that actually has a phone number and can send calls and SMS.
Client 'No Data': The connected device that does not have a working phone number and cannot send calls and SMS by itself.
S2C: Server to Client.
C2S: Client to Server.
A server simply means the software that will relay messages and statuses to other devices. It is not capable of client stuff. However, there should not really be anything stopping one from installing server and client software on the same device, thus making the device act as both a server and a client.
One will carry a 'non-data' device around in their pocket and leave a 'data' device behind. Both devices will somehow connect to the server.
A 'Data' Client will connect to the server through mobile data (assuming it already isn't the server) and the 'No Data' client would connect via Hotspot/WiFi and also connect to the server (assuming it isn't already the server).
The 'easiest' server configuration that I can think of is similar to how XMPP works; plainly a standalone server software and clients connect to the server. Outside of this, I have no clue how the server should operate.
I had ideas about how a phone could be both a client and a server, but quickly ran into problems that would certainly show up in practice.
The easiest way, without any real knowledge of what is required, seems to be to fork an already-created SMS or calling app and add 'mirroring' functionality to it. It would be best if there were two apps, one solely for SMS and one solely for calling, but I suppose I see nothing wrong with combining the two. The apps should have a 'server' option somewhere, where one can specify where the server is and what kind of client the device is, 'Data' or 'No Data'. (It would be possible to write a desktop client too, which would allow one to send texts and call from a desktop.)
I have no thoughts on what protocol they should or should not use, at the moment, besides some vague notion that VoIP could work for calling.
I do not have any thoughts on what data storage method text messages should be in or how they should be transmitted between clients.
The main contention point is phone calls: Client 'Data' will need to both relay what the other party is saying and decode and transmit what Client 'No Data' says to the other party, and vice versa for Client 'No Data'. If there is more than one Client 'No Data', then I don't have a clue what should occur, besides Client 'Data' 'locking' speaking access to one client at a time. (Of course, Client 'Data' can speak itself.)
Call Mirroring seems like it would be the easiest to implement since it is just audio and you don't need to save it or anything. The Server will need to send the data to clients that need it.
There needs to be handling for the following case: if someone manages to associate their device with the server, they can do 'bad stuff.' Therefore, by default, the server needs to have reasonably good security policies for authorizing devices.
I am less angry now.
Going phoneless was not an option, I suppose, since family members need to be able to contact me 'at all times'.
I've received a Samsung Galaxy A02S, which I do not recommend anyone buy. I hate Samsung now. I was indifferent before, but now I hate them.
One of the apps that Samsung preinstalled, literally not a joke, had 'The Great Reset' logo as its icon. I cannot 'disable' the 'Galaxy Store.' I've 'disabled' every Google app that I could and uninstalled the two that I was allowed to uninstall. (To my surprise, I was allowed to use the phone without being forced to make a Google account or a Samsung account; however, Samsung did force me to agree to the Terms of Service and Privacy Policy just to even use the phone.)
But this is not why I hate Samsung. They, or perhaps Cricket, removed the OEM unlock toggle, so I cannot unlock the bootloader and root the device. I don't know who made this decision, but I assign the blame to Samsung, and I hate Cricket too for forcing me to """"""upgrade"""""". ... There really was nothing wrong with the One Plus 6.
So, I am thinking about how to use my old phone without using the new phone, and I've come up with a dumb solution. Really, the only things I 'need' from the A02S are phone calling and texting (SMS, since Americans don't use instant chat apps, unless you count Apple's Messages app, which confused my dad because he thought that you can send 'texts' over WiFi and was surprised that Android couldn't, and presumably now thinks Androids aren't as advanced as iPhones when it comes to messaging. Actually, most iPhone users I know don't know that Apple made their Messages app an instant chat, which is why they always have their WiFi/data turned on; otherwise they will never receive 'texts' from iPhone users) and, optionally, when not at my house, data.
So when I'm out and about, I will enable hotspot and use my old phone. When people text/call my 'new phone'... IF the app exists, I will try to have it forwarded to my old phone. If it doesn't ... hmm. Maybe I can write the app myself, can't be that hard? I know Perl, so I'm probably already an expert in Java. If I end up writing it, I'll release it under a freedom-respecting license and make it available on F-Droid. In other words, you will not be forced to have a Google account to download and use the app.
Honestly, the A02S, in every technical aspect, is worse than the One Plus 6, which is almost not an exaggeration, except that the A02S has HD VOICE COMPATIBLE WITH CRICKET NETWORK. Is there even a good phone service provider out there that doesn't fuck their customers?
Disclaimer (aka: don't sue me, Cricket or Samsung): this blog post is an opinion and should not be used as authoritative fact on this topic.
It is a One Plus 6. Are you fucking kidding me? It can use 3G but it also can use 4G. I know that you are shutting down the 3G but did you really have to kill my phone number too?
To activate or continue using a device on the Cricket network, they will need an HD Voice-capable smartphone that is compatible with the Cricket HD Voice network. This includes HD Voice-capable devices from Cricket, and select BYOD (bring your own devices). Not all 4G LTE phones brought to Cricket will work on Cricket’s HD Voice network, unfortunately. Please know that they may be eligible for special offers if they choose to upgrade devices. They can dial 611 from their phones, or visit a Cricket Wireless store for assistance. For more information, you can check out the link provided:
Guess whose phone, released in 2018, isn't compatible with the Cricket HD Voice network? :)
It's fine, I don't need a phone anyway, they are botnet.
Setting up cgit is very easy. The only reason why I didn't before was because I had this crazy idea that somehow setting up software for gitting would be 'hard'.
As always, actually installing it, configuring* it, and using it is very very easy.
Anyway, it is possible to access my personal git page by navigating to git.jakesthoughts.xyz. Gemini bros, I don't have a location for you, my apologies. ... I've seen git repos on Gemini and they always seem goofy to me. I am not against adding a git repo for Gemini, but I've started to hit a limit on how much memory this VPS can spare, 512MB (or 473.1MiB)... and I have 8.5MiB free as of this very moment. Big yikes. cgit is a CGI script, so it runs every time someone accesses the domain, meaning it only uses memory when it needs to. It is also written in C, which helps.
Ok, I added a '*' next to configuring. Most likely, you want syntax highlighting and an about tab/page, right?
This part is not so straightforward. Let me be clear on this: in '/etc/cgitrc' the order of things matters. 'source-filter=' should be above 'scan-path=', and so should 'about-filter=.../about-formatting.sh'. 'readme=' affects which script it will `exec`, based on the file ending.
So if you decide that the about page should reflect the contents of, say, README.md, then make sure the script that gets `exec`'d can actually run! I, for instance, had to install pip and use pip to install the markdown module.
Reading the man page helped to identify that 'clone-url' is what people would use to clone your repos.
css=/cgit-css/cgit.css
logo=/cgit-css/cgit.png
enable-http-clone=1
virtual-root=/
readme=:README.md
about-filter=/usr/lib/cgit/filters/about-formatting.sh
enable-blame=1
clone-url=https://$HTTP_HOST/$CGIT_REPO_URL
source-filter=/usr/lib/cgit/filters/syntax-highlighting.sh
scan-path=/srv/git/
The reason I use /srv/git as the scan-path is because I symbolically linked to a directory in the home of a regular user. I could do this for any user and they would show up on the cgit index page.
To see what themes are available for the syntax highlighter, try '$ ls /usr/share/highlight/themes'. I am currently using 'peaksea'. Enable it with the '--style' flag.
The easiest way that I've found to use git:
Server: in the git directory, in my case ~/dev/, $ git init --bare git-repo
Local: change directory as appropriate, $ git clone ssh://username@remote-server/home/username/dev/git-repo
Suddenly, everything is set up for you. When you decide that it is time to push to the server: $ git push
This is the easiest way that I've found to do gitting.
There is probably an easier way of doing it all. I am still new to gitting in general.
In this blog post I will be talking about F.E.A.R. and its sequels. A diligent observer will notice that I have listed F.E.A.R. as 'S Rank' on my 'game reviews' webpage (this will be the first review since putting up that webpage, almost a year ago) and have ranked all of its sequels as well, but not as 'S Rank'. I played the DLCs again for the sole purpose of this review and realized that Extraction Point is a solid 'B Rank' and Perseus Mandate is a solid 'A Rank'. F.E.A.R. 2 goes straight to 'D Rank' and F.E.A.R. 3 ends up at 'F Rank'.
If you have never played any of these games but you are a fan of shooters, I recommend you stop reading, lest you get spoiled, and actually play them: 'High Difficulty' for F.E.A.R., 'Normal Difficulty' for Extraction Point, and fluctuate between 'Hard Difficulty' and 'Easy Difficulty' for Perseus Mandate depending on the hostile(s). The reason I recommend this is based on how much fun I had playing each of the titles.
For F.E.A.R. 2 and F.E.A.R. 3 you can choose what you want; it doesn't matter. Neither can hold a candle to F.E.A.R., as they are F.E.A.R. only by name. Though I have beaten F.E.A.R. 2 on normal and 3 on hard difficulty, I do not recommend it. For F.E.A.R. 3, playing as the... err... 'bad guy' makes the game much easier than playing as the 'good guy' character, due to the way they handle combat. I am unable to play F.E.A.R. 3 anymore on Linux due to DRM, so anything I say about F.E.A.R. 3 will be based on memory. Truthfully, any complaints about F.E.A.R. 2 will apply to F.E.A.R. 3; this I have no doubt about.
Playing F.E.A.R. and the expansions with Steam's Proton works fine; however, with Extraction Point and Perseus Mandate you must manually edit a config file to get the screen resolution you want. For convenience, this command will help you find the correct file: find ./ | grep settings.cfg. It assumes that your current working directory is somewhere above where you installed F.E.A.R.. Additionally, the game should be crisp - as in, the microsecond you start or stop moving the mouse, the game responds accordingly. If you notice a 'floaty' experience, maybe adjust FSAA and texture filtering.
Among the gamer community I notice that people view F.E.A.R. as a scary game, I mean the box art is spooky... The name of the game leaves a lot to the imagination: Why did they name the game F.E.A.R.? Is that the emotion I will experience the most when playing this game?
Some people are scared of scary things. They avoid being scared, which is understandable. However, in F.E.A.R., Alma is not a scary thing. Neither is Fettel, nor any of your buddies-turned-ghosts. There are jump-scares, but they are hardly worth freaking out about; they are just unexpected. What is scary is a soldier accidentally jump-scaring you; that is the scariest part of the whole game, because soldiers can actually do damage to the player. But one does not play F.E.A.R. to be scared; one plays F.E.A.R. to kill the enemy in a manner which not only makes one feel like a God among men, but also because of the beauty in it. At least with the first F.E.A.R., anyhow.
I looked at the price of F.E.A.R. on Steam and learned something incredibly sad. Warner Bros have locked the game behind a bundle and you cannot buy the game independently; you must shell out $55 for the bundle to play this game on Steam. On GOG, 'F.E.A.R. Platinum' is $10, which contains F.E.A.R. and its expansions. Warning to non-Europeans: GOG is European, so you may have to spend extra money because of this. Reviews and complaints on the F.E.A.R. forum(?) page on GOG suggest that DRM gets installed... No winning for people who want to own the game legitimately, eh?
After fighting, when one inspects the battleground there are combat signs everywhere, the walls plastered in bullet decal effects, enemy bodies bloodied and dead, objects and items strewn about... etc. My wish is that these things stayed long after the combat has ended.
It is easy to get turned around in F.E.A.R., and when you walk down a pristine hallway... This looks familiar... Have I fought here? Had there been obvious combat signs, I would know immediately that I am going backwards. Thankfully, in F.E.A.R. it is usually clear where to go. Sometimes it is not, and you spend like 10 minutes looking for where to go, only to find you need to climb a ladder. A lot of care has gone into the level design, not only for driving the player forward (usually) but also for the AI to take advantage of.
The AI in F.E.A.R. is truly one-of-a-kind, in a good way. Fighting them ACTUALLY feels good... Of course, just reading what I have to say about the AI will do it no justice! You have to experience it to enjoy it! When playing a different game, you feel like you are fighting the computer. The enemies make semi-predictable moves and do things that are just... Well, in multiplayer no one would do those things. In F.E.A.R. it feels different. Players might do these things! A player definitely would flank you, as the AI in this game is fond of doing! The enemy combatant provides 'cover' fire for their buddies, and it doesn't matter if it's effective or not, because it IS happening. The player FEELS suppressed because he IS suppressed, and not because some janky function detects the number of bullets coming near the player and applies a 'suppressed' effect. Of course, if you are new to video games, enemies shooting at you means nothing; you are not the one who dies, after all. But if you treat the game like it's REAL LIFE (lol)...
I yell something about covering fire. My squad runs forward to flank -- wait a minute! WTF is that hostile doing? He ran right into my bullets! He's in the OPEN! "Waste this DUMBASS!" I telepathically communicate to my brothers. The entire squad lights up the hostile with bullets and -- wow! He can apply medkits really fast! He goes down quickly. He must have been low on medkits because... Well, no one sane does what he did! We crowd his body and yup, that is the guy that somehow killed over 500 of us. 'How did he kill so many of us with such poor tactics?' I think to myself. Or I would, if I wasn't a replica soldier whose only line of thought is completing the objective.
F.E.A.R. is such a fun game and it frustrates me that other shooters are not as good as this one is. It was released in 2005 and somehow I find this game to be way more enjoyable than others. However, that is not to say that F.E.A.R. is a perfect game; there are some enemies that are obnoxious.
I will list the most annoying enemies from high to low annoyance: 1. The 'Y's, 2. The Mech, 3. The Ninjas, 4. The Turrets. The 'Y' enemy is the most annoying because it can fly and it will cause you to waste several medkits per group (2 in a group). I found that 3 shots from the Particle Weapon will kill them on hard difficulty. You can aim at their 'limbs' as well. The mech enemy is also annoying; even though the player only fights around four of them, they still do a lot of damage. I found that five sticky grenades will kill one instantly, or come very close. I tried a rocket launcher on them but it does not seem to do much, as a rocket will miss the mech unless the range is <10 yards (the mech can and often does strafe 'left' or 'right', and the rockets don't fly fast enough). The ninjas appear only twice throughout the first game but they annoy me greatly. Lastly, the turrets: there is a way to cheese them and kill them without taking damage, but it takes some time.
Weapons in F.E.A.R. are good as well. The player can only carry three at a time; at the beginning your loadout will look like this: Pistol, SMG, Rifle. After some time it might look like this: Shotgun, Rifle, and 'heavy' weapon. I have nothing really interesting to say about this, but I will stress that just because you see a rocket launcher doesn't mean you must, or even should, drop one of your other firearms for it. The rifle is a viable weapon towards the end. I assume the pistol is as well, but I always drop it for a weapon with higher DPS like the shotgun. The pistol doesn't shoot fast enough for my liking. There is a scoped weapon which actually does decent damage against 'heavies'. I greatly prefer the first rifle you find. It really does work throughout the game.
When one looks at the environment in F.E.A.R.... yeah it is not exactly pretty, but it gets the job done. If this game had Halo Combat Evolved graphics that still would've been fine with me to be honest. I am not a graphics snob but I do require a game to have graphics where it matters. F.E.A.R. actually does this which is good. Other people may complain about it and I am not sure that I can blame them. A modern phone probably can play F.E.A.R. on max setting with no problem to be honest. The battery will drain though, that is for certain.
One part in the story that I don't like is when the Point Man is required to kill Fettel. The game refuses to progress beyond that point, unless the player MURDERS Fettel. At this point the game is basically over and the player probably will play the next game, Extraction Point.
Again, I recommend 'normal' for Extraction Point because that game... I did not enjoy myself as much as I did in the first game. The main reason is the enemies: their placement is often bullshit.
Second reason is someone had a VISION. That vision requires locking Point Man out of using his abilities/flashlight until the 'scene' is finished. First of all: Point Man has a very special power due to his genes, and his abilities, including the usage of the flashlight, absolutely should not be taken away from the player. If the player is actually Point Man and wants to activate uber-reflexes when something spooky happens, he should be able to AT WILL, not when the game designers decide "ok, mates, we spooked the player, let the player use Point Man's abilities and flashlight again!" I'd rather feel like I am Point Man, enable slow-mo and take a shot at the spooky-thing, than realize the game designers needed me to understand their stupid vision. And when I say vision: Point Man in the first game often has hallucinations, but the game does not take away his abilities in the middle of play, when he is not having hallucinations.
Third reason is the level design really went downhill. Fighting in the apartments is so dull.
Fourth reason is not a game-play reason but a story reason. As Fettel himself puts it:
"I know it doesn't make sense. Not much does anymore."
I agree - the story in this expansion really affected my enjoyment. I care, but when the developers do not, why should I?
The doors must be possessed: they shut by themselves.
For Perseus Mandate... Now that I am playing it again for the sake of this review I realize that I have placed it in the wrong category as it should be A Rank. I will fix this soon.
I recommend 'hard' at least at the beginning, then 'low' when your enemies are... not so great. Then change it back to hard. Why, you ask? At the beginning the game is almost exactly like F.E.A.R., with the exception of a few 'game designer vision' things, and then you meet the 'super ninjas', who are so annoying to fight. They can take more than 5 shotgun blasts to the face and still live (even on 'low' difficulty). You can SEE their hairlines. Then you have to fight the bossman of the super ninjas, which is just not enjoyable at all.
Overall Perseus Mandate actually was fun, with the exception of the Super Ninjas, who I lowered the game difficulty for; so this game is actually A Rank - not B Rank, and Extraction Point is B Rank - not A Rank. Amazing how the last fight of a game affected what I thought about the game overall; when I thought of Perseus Mandate I always thought of the boss fight and how not fun it was. After wasting the annoying bossman, the player (assuming no prior knowledge, or not having read this post) might think: "Overall I had fun! Can't wait to play F.E.A.R. 2 and see how they improve on... well, everything!"
For F.E.A.R. 2 there is actually more than one game, but I only have Project Origin. I have no desire to pay for the DLC known as Reborn... I would have to buy the 'FEAR Complete Pack', which is $55 USD, just to get the DLC. It cannot be bought independently. Warner Bros has made it this way. What this means is that someone like me, who bought F.E.A.R. 1 + DLC, F.E.A.R. 2, and F.E.A.R. 3, would have to buy them all again just to play the DLC. I will not do this.
One thing one notices immediately is that the player can't LEAN. Ah, but at least they added sprinting! Only for 6 seconds, and it only increases your speed by x1.3! Then Beckett develops asthma and can't run again until his sprint meter refills. Actually, the first thing one notices is the disgusting permanent UI overlay. And oh man, that font is GIANT! Man, the bullet decals are really... not F.E.A.R.-looking at all. At least I see the 'x' on my cross-hair when I hit a hostile! Man, the dialogue is really suffering. Did a child write the dialogue or something? A child would say these 'evil things' when role-playing as a bad guy... Man, did that guy really just say anime and pizza in the same sentence?
When one stands back and looks at what was presented before them, one can only conclude one thing: this game 'F.E.A.R. 2', which is nothing like 'F.E.A.R. 1', was designed for consoles. Absolutely nothing about this game inspires me. Think of the most generic shooter that you know of. You now have experienced F.E.A.R. 2 and 3. Sure, sure the AI that I was raving about... It exists in some form... But everything about the game sucks. I am so bored. They should've called this game B.O.R.E..
Even though I have not played F.E.A.R. 3 lately, I am certain that the same criticisms I have of F.E.A.R. 2 apply to F.E.A.R. 3. One cool thing, I suppose, is that F.E.A.R. 3's campaign is two-player! But even if I was playing on Windows, I could not force anyone to play this game with me in good conscience.
So the story is interesting... sort of. Well, on paper it is boring. Actually the story... Hmm... You don't play this game for the story. Well... the story makes you connect to the game but it is just the 'background'. There is no intertwining plot or anything. It just sort of exists and you experience the journey that Point Man / Sargent / etc experiences.
So, canonically F.E.A.R.'s expansions, Extraction Point and Perseus Mandate, do not happen. Instead, it is F.E.A.R. into F.E.A.R. 2 into F.E.A.R. 3. To be honest it doesn't really matter. If it was up to me, the first game and its expansions would be canon.
So, F.E.A.R. is really good in my opinion. To me it sets the baseline of what a first person shooter should be like. It is also a slog. It is not a perfect game but it is good. It is fun. Which all a game really needs to be. I think most people should play this game at least once, if they can handle the 'spookyness' which is barely spooky at all. Alma scares you a few times but it is just jump scares. Nothing damaging. Enjoy this webm (iToddlers bfto!) where I play the game!*
Note: * = Please forgive the /g/-ism, but if Windows can do WebM, and the most uncommon desktop OS can do WebM, but the second most common desktop OS cannot, then that is just sad.
"Wtf? Jake, I am not reading all this text. Give it to me straight: is it worth it to GPU passthrough?"
Honestly? In my opinion? ... It's fun to set up! Passing the GPU to and from a VM involves restarting X each time, which is annoying, but since I'm GAYMING it's not that bad. Sometimes I think I might as well just dual-boot, but I hate Windows too much to do that. So, this is mainly for bragging rights.
I have begun to replay a game I haven't played in a long time, since that game doesn't work with Proton. It's a bittersweet nostalgia trip - the game isn't as good as I remember.
Recently I have begun playing a video game. It isn't too graphics-intensive and Steam's Proton handles it perfectly. Except for one small, tiny, minuscule detail: my operating system isn't Windows, so, obviously, I am not allowed to access the online features of this game. But I want to access the online features of this proprietary game...
This leaves me with either hacking Proton and somehow tricking Easy Anti Cheat (EAC), or installing Windows, either dual-boot or through a Virtual Machine (VM). (Some anti-cheat software, like EAC, runs alongside the Windows kernel as a kernel module to make sure you aren't 'cheating'. :)
I am not smart enough to hack Proton, and I strongly dislike the idea of having to restart my computer just to play video games and reboot into */Linux when finished. So, rather than dual-boot, I decided to install Windows into a VM and pass through my GPU, which allows me to play video games almost as if Windows was on 'bare-metal'... Almost.
Here I will note some things that are useful to know in a situation like mine:
$ lspci -knnv | grep -e '\[....:....\]\|IOMMU\|Kernel' | sed -e '/Subsystem/d' -e 's/\(.*\)IOMMU\(.*\)/\tIOMMU:\2/'
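The one-liner above is dense; here is a hedged sketch that walks sysfs directly and prints each IOMMU group with its devices (the sysfs paths are standard, but group numbers and PCI addresses are machine-specific). Ideally the GPU and its HDMI audio function sit in a group by themselves; if they share a group with unrelated devices, you may need a kernel carrying the ACS override patch.

```shell
# List every IOMMU group and the devices inside it.
# If IOMMU is disabled, the directory is absent and nothing prints.
for group in /sys/kernel/iommu_groups/*/; do
    [ -d "$group" ] || continue
    echo "IOMMU group $(basename "$group"):"
    for dev in "$group"devices/*; do
        # each entry is a PCI address like 0000:01:00.0
        echo "    $(lspci -nns "$(basename "$dev")")"
    done
done
```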
With Arch it is a simple matter of telling my package manager to download and install linux-zen, since Arch officially supports the Zen kernel (they provide a binary so I do not have to compile it).
However, this may all be for naught if you decide you need to do some RDTSC trickery, in which case you'll end up compiling a kernel anyway - it is a lot easier than you might think, but definitely time consuming. In my case EAC doesn't ban VMs, so I did not bother patching in any RDTSC trickery.
Passing the GPU through while I had a spare GPU for GNU/Linux was pretty easy. But I was not happy with this configuration because my monitors had an insufficient number of inputs... My main monitor only supports 1 VGA and 1 DVI. My TV 'monitor': only 1 HDMI, 1 VGA, and 1 RCA. Neither of my GPUs supports VGA, so it wasn't possible to tell my main monitor to change its source, which would've been the easiest solution.
In other words, one half of my monitors would be off unless I was always using a VM. This annoyed me, so I decided that I would stick with one GPU and pass it to and from a VM. This is known as single GPU passthrough, for which there are many tutorials. With this configuration, I pass through my main GPU to Windows (even though I was using it previously!) and use the motherboard's iGPU for displaying GNU/Linux.
I ended up having to dump the BIOS of the GPU I was using, a Nvidia card, because when I would try to start the VM the GPU would throw an `error 43`. Remote desktop helps in identifying these situations. You could use Spice too, I suppose, but at that time I didn't use my motherboard's iGPU (I probably should've. Hindsight is 20/20). Fortunately, after commanding Windows to shut down, the GPU would be successfully passed back to the host and X would restart, using QEMU + Libvirt hooks. So, at least, that was half of the battle done.
When I was trying to dump the BIOS of my GPU (with nothing using it, including vfio-pci, as I unbound it prior to dumping) I was greeted with `cat: rom Input/output error`. I found that setting the kernel parameter `vfio-pci.disable_idle_d3=1` and running `# setpci -s 01:00.0 COMMAND=2:2` would allow me to dump the ROM. Afterwards I ran `# setpci -s 01:00.0 COMMAND=0:2`, though I am uncertain how important that is. I made sure to remove the kernel parameter as well.
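For future me, a sketch of the whole dump sequence in one place. The PCI address 01:00.0 is an example (substitute your card's address from lspci), the guard makes it a no-op if the device isn't present, and remember the vfio-pci.disable_idle_d3=1 kernel parameter has to be set at boot, not here:

```shell
# Dump the GPU ROM via sysfs. Run as root, with nothing bound to the
# device (unbind vfio-pci first). 0000:01:00.0 is an example address.
DEV=/sys/bus/pci/devices/0000:01:00.0
if [ -e "$DEV/rom" ]; then
    setpci -s 01:00.0 COMMAND=2:2      # set the memory space enable bit
    echo 1 > "$DEV/rom"                # unlock the ROM file for reading
    cat "$DEV/rom" > /tmp/gpu.rom      # the actual dump
    echo 0 > "$DEV/rom"                # lock it again
    setpci -s 01:00.0 COMMAND=0:2      # clear the bit we set earlier
fi
```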
Another piece of advice that I've seen is to open the resulting ROM file in a hex-editor, look for something that starts with `VIDEO`, and delete everything above the `U` - everything, all the way to the top. I did not need to do that, since my dump didn't contain anything above the `U`. However, this advice was for Nvidia cards; I am not sure about AMD.
Some tutorials recommend booting into Windows 'bare-metal' and using a program called GPU-Z but I haven't tried this.
As of this date, Dec 19 2021, I can confirm that EAC (at least implemented for Xenoverse 2) doesn't ban VM users but they may decide to change their mind in the future. If that happens I will hopefully learn my lesson about playing proprietary games and will never do it again. Doubtful though.
An aside: I had a paragraph explaining that Windows doesn't properly shut down GPUs when it itself shuts off, thus resulting in instability, but I discovered that it was actually a bug in Mesa causing graphical glitches, rather than passthrough doing anything weird. This paragraph serves no purpose but to remind my future self that maybe it is just the software. Though with AMD this is a legitimate concern, since their GPUs have reset bugs. There are some work-arounds for them: Windows Pro allows one access to 'shutdown scripts' that Windows runs prior to shutting itself off, where you can turn off the GPU. If you don't have Windows Pro or better, then you aren't allowed access to shutdown scripts and need to fork out several hundred dollars to upgrade. However, there is a host-side option called vendor-reset, but I do not know much about how it works.
An additional aside: AMD's Adrenaline takes a minimum of 10 minutes to install the driver, and it shows so, so, so many ads while doing it. The installation failed for me, so I decided to poke around in the logs and discovered that AMD attempted* to send analytics to Google! What a different world from package managers. Also, I very much dislike that they replaced every driver link with their Adrenaline software.
"Why yes, I would like to download software that is half a GB big with the sole purpose of downloading and installing drivers for me! It is too hard to just download the specific driver for my specific GPU and click install. I am just too stupid for it!" - Imaginary person that AMD created and somehow believes we are him.
With Nvidia... I know they get a lot of hate for how they treat the */Linux community (well deserved IMO), but installing the specific driver for my GPU on Windows was very easy, even with the card in the second slot. It was so easy I actually cannot remember anything noteworthy about it. Nvidia had a page for that specific GPU that contained the driver, whereas AMD decided that made too much sense. Gotta make those 3¢ per ad, don't you know?!
Note: * = attempted to, since the guest was subject to the host's `/etc/hosts` :)
If you wanted to view my XML sheet and hooks for some reason:
win10.xml

[Author's Note: I don't actually know anything about fingerprinting technology, so if I give you the impression that I am an expert or something, I am not! (Advisory for North Korean wanna-be defectors {I don't want them to get deleted thinking they're safe}: I don't know what fingerprinting technology will say about TLS connections besides "they're TLS but ... X", and I do worry what 'X' could be. But for most people having TLS is completely fine.)]
Many moons ago, when I was away from my desktop, I had my laptop and my phone. I realized that I wanted to edit something on my computer, so I thought a thought that most people would think: "I'll just use my phone's hotspot and SSH into my computer!"
Unfortunately, this did not work. So, for the next time I was out and about, I remembered to allow SSH on 443 and set up port forwarding, since I was behind a NAT.
Bafflingly, this did not work either. It was on port 443! ... I realized that the answer must lie not only with ports but also with what the connection looks like, because for some reason my phone's ISP is fingerprinting*.
A word about 'modern' TLS/SSL that old guides don't mention, because it wasn't a requirement at the time: a cert and a key file are required. If they are not present, this will not work unless your software (probably OpenSSL (my current version is 1.1.1l)) is outdated. Here is what I am using; those marked with an '*' are entirely optional and you can use what you want.
Software | Description
OpenSSL | To self sign a cert and key
Nginx* | To proxy the SSH server on 443
socat* | To connect to the SSH server
I'll assume you are smart enough to figure out whether you have these, or some reasonable equivalent, installed or not. My guide will be based on these specifically, but I will include 'honorable' mentions. I will also assume you have a basic understanding of how *nix operating systems work, how to configure software, and that you will at least look at the man pages for each piece of software.
Please note that this does require setting things up beforehand; if you are on vacation and in a similar position as me, you're SOL. Make sure to bookmark this page and revisit it when you get back home, eh? (Also there are A LOT of tcp-over-tls services and I would not be surprised if they were all run by one person.)
You should make a directory where your certs will be. Ideally, only root will be able to read and write in it.
The very first thing to do is to generate the cert, or nothing will work and you will get very angry. So, run the following: '# openssl req -x509 -nodes -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 1000'. When it asks for information, in this case, it doesn't matter; you might as well just press enter on all of them. In this instance we will not verify that the cert is 'valid.' Of course, if you have a real cert along with a real domain name issued by a real CA, then use that.
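If you want to double-check that the cert and key actually pair up, here is a quick sketch. It regenerates a throwaway pair non-interactively (the `-subj` flag skips the prompts; the values don't matter for a self-signed cert we never validate); swap in your real files if you already have them.

```shell
# Sanity check: does the cert pair with the key?
dir=$(mktemp -d)
cd "$dir"
openssl req -x509 -nodes -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -sha256 -days 1000 -subj "/CN=whatever" 2>/dev/null
# The public key inside the cert must equal the one derived from the key.
cert_pub=$(openssl x509 -in cert.pem -noout -pubkey)
key_pub=$(openssl pkey -in key.pem -pubout 2>/dev/null)
[ "$cert_pub" = "$key_pub" ] && echo "cert and key pair up"
```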
Nginx 1.20.1
http { ... }

stream {
    upstream ssh {
        server 127.0.0.1:22;
    }
    server {
        # IP address under the NAT
        listen 192.168.0.68:443 ssl;
        #listen [::]:443 ssl;
        ssl_certificate /var/cert/cert.pem;
        ssl_certificate_key /var/cert/key.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
        proxy_pass ssh;
    }
}
The above block cannot be in the http block. So, if you have a `sites-enabled/` directory and an `include sites-enabled/*.nginx;` IN the http block of nginx.conf, the block above will not work from there. Why? Because the SSH protocol is not HTTP. They are different.
The simplest 'fire-and-forget' way to connect to it (that I've found) would be with socat.
$ ssh -p 443 -o ProxyCommand='socat STDIO OPENSSL-CONNECT:%h:%p,cipher=HIGH,verify=0' 192.168.0.68
At this point your SSH connection should be concealed by TLS on port 443. Fingerprinting will think it is just normal TLS traffic, i.e. HTTPS traffic. With that the guide is over... But typing `ssh -o ProxyCommand='...'` every time is tiresome! Edit your `~/.ssh/config`:
host ssh_tls
    Hostname <IP address>
    Port 443
    ProxyCommand socat STDIO OPENSSL-CONNECT:%h:%p,cipher=HIGH,verify=0
    User <username>
With the above you can very quickly ssh like this: '$ ssh ssh_tls'. A lot quicker, eh? (What is this? Hint: '$ man 5 ssh_config')
Socat server: # socat OPENSSL-LISTEN:443,fork,cipher=HIGH,verify=0,certificate=/var/cert/cert.pem,key=/var/cert/key.pem TCP-CONNECT:localhost:22
Socat client: $ ssh -p 443 -o ProxyCommand='socat STDIO OPENSSL-CONNECT:%h:%p,cipher=HIGH,verify=0' 192.168.0.68
Socat (1.7.4.1) is an honorable mention because I do some web-development and turning off Nginx/Apache was not really an option. The verify option can be removed if you have a real cert that isn't self-signed. Socat is an interesting piece of software and can do many other things, not just this.
Stunnel (5.60)
pid = /tmp/stunnel-ssh.pid

[ssh]
client = yes
accept = 9048
connect = 192.168.0.68:443
Stunnel is a little confusing to understand at first but after you understand it, it becomes 'ezpz.' Basically, you set up stunnel first, which binds to a port and stunnel forwards everything that goes into that port to go to a specific location with TLS. So in this case everything on *:9048 should be encrypted with TLS and go out to 192.168.0.68 at 443. Don't forget to turn off/kill stunnel after you finish using it. (Check the reference for stunnel to get a 'fire-and-forget' script.)
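A hedged sketch of what such a 'fire-and-forget' wrapper might look like: start stunnel, ssh through the local port it opened, then kill it no matter how the session ended. The config path is an example, not the author's; the pid file and port match the config above.

```shell
# Fire-and-forget: tunnel up, ssh through it, tunnel down.
CONF=/etc/stunnel/stunnel-ssh.conf    # example path; point at your config
PIDFILE=/tmp/stunnel-ssh.pid          # matches the pid= line in the config
if command -v stunnel >/dev/null && [ -r "$CONF" ]; then
    stunnel "$CONF"                   # daemonizes by default
    ssh -p 9048 127.0.0.1             # lands on 192.168.0.68:443, wrapped in TLS
    kill "$(cat "$PIDFILE")"          # don't leave the tunnel running
    rm -f "$PIDFILE"
fi
```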
Apache (2.4.51) and proxytunnel (1.10.20210128)
# Change IP to yours under the NAT
<VirtualHost 192.168.0.68:443>
    SSLEngine on
    SSLProtocol -all +TLSv1.2 +TLSv1.3
    SSLCertificateFile /var/cert/cert.pem
    SSLCertificateKeyFile /var/cert/key.pem

    # Normally you want this turned OFF
    # otherwise you are an open relay
    ProxyRequests on
    # Only allow port 22
    AllowConnect 22

    <Proxy *>
        # Disallows proxying for everything...
        Require all denied
    </Proxy>
    <Proxy 127.0.0.1>
        # ...but allows proxying to 127.0.0.1
        Require all granted
    </Proxy>
</VirtualHost>
Make sure you have enabled: ssl_module, proxy_module, proxy_connect_module. Additionally, make sure you actually have a `Listen 443` somewhere. Turn Apache on or restart it.
Try this: '$ ssh -p 443 -o ProxyCommand="proxytunnel -z -E -p %h:%p -d 127.0.0.1:22" 192.168.0.68'. If it works, it works, and you should test whether Apache will allow proxying to other locations (it should not, and if it does you did something very wrong). If it results in an error, check that Apache is turned on, then check Apache's error logs.
The difference between connecting through Nginx or Apache is how the proxying is done. Nginx proxy-passes to 127.0.0.1 automatically (reverse proxy), while with Apache you tell the server that you want to proxy to 127.0.0.1 on port 22 (forward proxy [admittedly, only to itself]). I'd like to reverse proxy with Apache, but Apache is best suited as a web server rather than a proxy server. Don't use a spoon as a fork - that kind of thing.
Now, if I had many services that I wanted to offer but only had port 443 I'd use forward proxying which Apache and/or Nginx (with some difficulty I must mention) can do.
OpenSSL (1.1.1l).
$ ssh -p 443 -o ProxyCommand="openssl s_client %h:%p" 192.168.0.68
Similar to socat but less typing, I guess. It is only an honorable mention because if a packet is bad it gives up entirely, and I haven't figured out how to tell it to ignore bad packets. If you look at the man page ('$ man s_client') you might notice a `-reconnect`, but that does not do what you think it does. It is also somewhat noisy.
Brief thoughts on SSH over TLS: double encryption with TLS plus default SSH seems to me like wasted CPU cycles. If possible it may be better to use something like OpenVPN or WireGuard, but not every phone has that, and it is a question whether a device connected to the hotspot will even be able to reach other devices over OpenVPN or WireGuard, because connections are being fingerprinted. With this setup, it is a guarantee that you will be able to connect to your computer, at least until fingerprinting software figures out how to do more privacy-invading things. Also, use ssh-keys with SSH and disallow passwords.
I suppose one could use rlogin or telnet with TLS (trivial with stunnel) to reduce the CPU cycle waste, but I won't write the guide on that - I don't want to be responsible for anything bad happening (since rlogin and telnet send things in plaintext) and it doesn't seem like that big of an overhead anyway. (What? Your computer can't handle TLS and SSH at the same time or something? lol! Stop using a 3rd world computer already. ... Although if it is that old then it probably isn't backdoored... Hmm...)
If you have a spare computer (under the same NAT), run the same commands; if it works then you're golden. If it does not work, then obviously it won't work over the hotspot either. I'd avoid connecting with the hotspot if you are unsure whether you set it up correctly - when they disconnected me I couldn't connect back to my home IP address. I waited about a day for the block to expire (not that I was constantly testing).
Jake, you are a liar and a techlet! This will not work!
Compare these files then, please. Wireshark is pretty good at it.
SSH

If you see TLS inside the NAT you will see TLS outside the NAT as well, meaning the hotspot will work. If you are on the technical side you will be able to see how you can configure a lot of this in somewhat different ways. Now, if you are exposing your stuff to the outside world it goes without saying: make sure your stuff is protected/hardened/hard-to-break-into.
Other software can do proxying as well, like Squid, but I chose to focus on Nginx and Apache since I actually use them on a daily basis.
I compiled this from various sources so future me will not have to spend hours tweaking search phrases.
This post would not have been possible without the following resources:
(Reddit - socat)

Disclaimer: * = When I was able to connect to my computer for like 5 seconds and then suddenly it would no longer work, my first thought was "THEY'RE FUCKING ME!", which caused me to write this blog post mainly because I wanted to fuck them back (my reasoning was that they could block SSH but they won't block TLS). I don't know what happened, but I can now SSH into my computer with the hotspot. I am not sure what to make of this.
I hope everyone had a good halloween! I, unfortunately, went to take a nap and when I woke up I missed halloween :(
Anyway, I am running a dictionary service which may interest some people. Basically, it uses the same dictionary that the wotd service uses but this time you can specify what word to look up. I should mention that because of the dictionary's age, some modern words like 'zoom', 'yeet', etc, will not be present.
It has an API!: $ curl -d "word=the word" https://jakesthoughts.xyz/dictionary
Because of that it is also very easy to implement as a bash function:
function lookup() {
    curl -d "word=$1" https://jakesthoughts.xyz/dictionary
}
There are other dictionary programs that you can use, of course: Artha, Goldendict, ... there aren't actually that many. Hmm. I haven't used any of these programs so idk if they are good or not. Artha claims to be able to use an offline copy, which I would have been interested in had I not already done this.
Jake you'll just be a creep and see what words I look up!
No - All I will see is someone accessing the url. If this is a major concern, you can download the source script yourself and run it locally.
The script. The dictionary I recommend (it is so old it falls OUT of copyright.)
You can directly visit https://jakesthoughts.xyz/dictionary but I am not applying any stylesheet to it (meaning black text on a white background), so your eyeballs will melt if you have gotten used to my current stylesheet. As for Gemini users... I haven't written this part yet, but I figure the easiest way to serve FCGI content would be with yet another CGI script that queries the FCGI script, since Doppio (and I assume many other Gemini servers) don't do FCGI.
Enjoy!
The title may suggest saving your favorite websites or saving images or saving videos to your hard drive. This is just a subset of what I meant, albeit a useful endeavor. My real intentions go further than that: Print that website out! Print that image out! Burn that video to a DVD disk! Print/burn EVERYTHING!
Meme sort of related but doesn't go far enough. Hard drives and other forms of electronic memory can FAIL.
But J-j-jake! I don't have enough ink for that! I don't have a DVD writer because modern gaming computer cases do not create space for them!
Pathetic. I suppose I can settle for you just saving everything onto a flash drive. I actually happen to possess a rather sizeable flash drive that I update infrequently - so I am at least understanding on that front. Also `ink` 🤭. I will let you in on a secret regarding printers: a single cartridge (toner) for a laser printer will typically outlast an inkjet cartridge, sometimes by thousands of pages, with a cheaper cost per page. The hardest part by far will be getting the printer to work, but that is a different story for another day.
Civilization - Institutions, Knowledge and the Future - Samo Burja (37 minutes, a good video.)
After watching the Civilization video, I was left with some kind of impression specifically about the past and how lessons of the past often BARELY made it to the present.
This fear encourages the schizo within. What if the internet breaks? What if electricity gets turned off forever? You can't read your blog if it's intangible! Obviously that would never happen unless aliens invaded Earth. And I'm not saying that aliens will invade Earth and target these specific weak points... but they could. The chance is non-zero. How much data exactly is intangible and therefore at risk? I suspect a rather high degree and if something bad happens that semi-intangible data will remain forever intangible. Now, I also think in the event of a real alien invasion (real or imagined), they would probably bring back the 'internet' but only selectively and with lots of restrictions meaning your creative blog posts probably will not make the cut. Additionally, 'privacy' will most likely be heavily discouraged, so Tor, and other methods of getting 'anonymous connections' will not be allowed to function the way they have previously. My uninformed schizo-take on technology but whatever: maybe they will make it so that packets will require some kind of identification just to be transmitted[1]. I should stop giving them ideas.
A different perspective, if my favorite boogey-man, the aliens (a stand-in for some powerful entity), doesn't sit well with you... Internet archives *can* be modified! One example that I know of: nearly every archive of original mewch, one of my favorite chans before it got [REDACTED]'d, does not exist, even though, according to others, they used to. You cannot find internet archives of them even though they did exist at one point (I've made some personal copies of some threads, but not enough, which I regret). Unless they are somehow made immutable through something like a blockchain, files can be removed or, worse, altered. Of course, physical paper can also be modified and destroyed, but doing so would require deliberate effort or a very bad case of carelessness. Paper documents will last much longer than electronic documents because they are already physical, not an abstraction somehow created from 1's and 0's that also somehow appears in a logical manner on a screen. Paper documents can also be converted back into electronic documents and printed again.
Regardless of the hypothetical risks, having a printed copy of something makes the intangible tangible (sure, you could argue semantics about 'what *is* written language? How does the brain interpret letters in such a way that we can understand abstract ideas from random chicken scratch?' but I think the planet's lingua franca will be somewhat resistant to being eradicated; take a look at Latin or ancient Greek, for example). You DO have a piece of history. With luck it will find its way to the right person in the future. Maybe it will end up being a 'redpill', or maybe everyone will greatly enjoy the story 1000 years later, or maybe future readers will think 'wtf were they doing back then?!', or maybe the religion will gain a new follower, or whatever. To me it doesn't really matter what the content is as long as it can reach the next generation(s) somehow.
I am doing my part! I've printed my entire blog! :^) Future historians will thank me for it.
I've also printed out some holy books, fiction, philosophy, and other things that I enjoy. You get bonus points for reading what you've printed more than once, since you are making that paper pay for itself. Information is valuable and nearly priceless - worth more than the paper itself. The next step might be organizing it all somehow; I cannot offer advice on that, though I want to. Another benefit of printing is that you can annotate the paper without feeling guilty. It's not a $70 book!
Jake, I am totally unable to acquire a printing device and even if I do, what I print will be used against me regardless of the content printed.
Hmm... I hope your future changes for the better so that you can spend hundreds of dollars on paper, a laser printer, and some toner. And when I say print, I mean print: if you are printing a book that you could buy, maybe buy the book? Printing will be a waste of paper if you are going to end up buying the book anyway, as I have for some titles. Don't buy eBooks though, unless you can either print them or transfer them to your own file system.
Also, I heard that looking to the past is like looking to the future. Be someone's past so they can see their future! Or something.
[1] That idea alone kind of spooked me. I have thought of some things that might help in dealing with it. It would be a good idea for people, myself included, to learn about 'underground' ways of connecting to the internet (more than just Tor and I2P and Yggdrasil). Maybe look into what is needed to create some kind of private intranet that could connect to other private intranets. I believe this would reduce the power that 'turning off the internet' would have. Off the top of my head, large mesh-nets seem like a decent-ish option, though I will plainly admit I don't really know how they work besides that the connected devices act as both servers and clients. I agree that it will be a pain to get people to even experiment with mesh-nets, as with everything technology related, especially when their internet already 'just works'. If one can create or join a mesh-net community, it would be a good idea to use TLS, since who knows what the other nodes are doing. Ah, but if the mesh-net gets super big then the FCC might force itself to get involved and... hmm...
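If you want to see what 'use TLS with strangers' means concretely, here is a minimal Python sketch using only the standard library. The node address is entirely made up, and on a real mesh you would probably pin the peer's certificate rather than trust public CAs, so treat this as a baseline, not a recipe:

```python
import socket
import ssl

# A default client context verifies the peer's certificate and hostname.
# On a mesh full of strangers' nodes you would likely pin the node's
# certificate instead of trusting public CAs, but this is the baseline.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse ancient protocol versions

def open_secure(host: str, port: int) -> ssl.SSLSocket:
    """Connect to a (hypothetical) mesh node and wrap the socket in TLS."""
    raw = socket.create_connection((host, port), timeout=10)
    return ctx.wrap_socket(raw, server_hostname=host)

# usage (would fail here, since the node is imaginary):
# conn = open_secure("node.example.mesh", 443)
```

The point is just that the encryption layer sits on top of whatever weird transport the mesh provides, so you never have to trust the nodes in the middle.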
(This blog post was 'in progress' before Facebook went down for several hours while the media is pushing that they should have a seat at the UN. These incidents did encourage me to actually finish this... Lately I have been having difficulty saying 'yes, this is finished.' I have to almost impulsively publish blog posts (the Doppio cgi post was finished before the Gemini blog post, for instance) otherwise I will try to perfect them forever.)
for the past 2 months i've been working on offering various services on my own because:
i'm using v*ltr (don't want to advertise for free) (and no i don't like l*ke) for now because i'm a 3rd worlder and i don't have fiber in my area
tl;dr it's poopoo ass garbage internet impossible to use for hosting
so much for decentralization idiot
anyways, i plan to keep all original web content (websites i write myself/static stuff) on neocities and all "services" on nauguscave.xyz
click this to go see GEGENKULTUR
this is a collab website based on a repo where people can add articles
why is it focused on hate? i love hating
yeah there is no particular reason for doing this one other than its funny
don't hesitate to contribute though... just do a funny...